Jan 21 13:04:40 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 21 13:04:40 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 21 13:04:40 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 13:04:40 localhost kernel: BIOS-provided physical RAM map:
Jan 21 13:04:40 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 21 13:04:40 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 21 13:04:40 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 21 13:04:40 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 21 13:04:40 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 21 13:04:40 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 21 13:04:40 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 21 13:04:40 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 21 13:04:40 localhost kernel: NX (Execute Disable) protection: active
Jan 21 13:04:40 localhost kernel: APIC: Static calls initialized
Jan 21 13:04:40 localhost kernel: SMBIOS 2.8 present.
Jan 21 13:04:40 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 21 13:04:40 localhost kernel: Hypervisor detected: KVM
Jan 21 13:04:40 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 21 13:04:40 localhost kernel: kvm-clock: using sched offset of 3402970076 cycles
Jan 21 13:04:40 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 21 13:04:40 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 21 13:04:40 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 21 13:04:40 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 21 13:04:40 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 21 13:04:40 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 21 13:04:40 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 21 13:04:40 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 21 13:04:40 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 21 13:04:40 localhost kernel: Using GB pages for direct mapping
Jan 21 13:04:40 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 21 13:04:40 localhost kernel: ACPI: Early table checksum verification disabled
Jan 21 13:04:40 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 21 13:04:40 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 13:04:40 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 13:04:40 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 13:04:40 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 21 13:04:40 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 13:04:40 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 21 13:04:40 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 21 13:04:40 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 21 13:04:40 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 21 13:04:40 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 21 13:04:40 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 21 13:04:40 localhost kernel: No NUMA configuration found
Jan 21 13:04:40 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 21 13:04:40 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 21 13:04:40 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 21 13:04:40 localhost kernel: Zone ranges:
Jan 21 13:04:40 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 21 13:04:40 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 21 13:04:40 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 21 13:04:40 localhost kernel:   Device   empty
Jan 21 13:04:40 localhost kernel: Movable zone start for each node
Jan 21 13:04:40 localhost kernel: Early memory node ranges
Jan 21 13:04:40 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 21 13:04:40 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 21 13:04:40 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 21 13:04:40 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 21 13:04:40 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 21 13:04:40 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 21 13:04:40 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 21 13:04:40 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 21 13:04:40 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 21 13:04:40 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 21 13:04:40 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 21 13:04:40 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 21 13:04:40 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 21 13:04:40 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 21 13:04:40 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 21 13:04:40 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 21 13:04:40 localhost kernel: TSC deadline timer available
Jan 21 13:04:40 localhost kernel: CPU topo: Max. logical packages:   8
Jan 21 13:04:40 localhost kernel: CPU topo: Max. logical dies:       8
Jan 21 13:04:40 localhost kernel: CPU topo: Max. dies per package:   1
Jan 21 13:04:40 localhost kernel: CPU topo: Max. threads per core:   1
Jan 21 13:04:40 localhost kernel: CPU topo: Num. cores per package:     1
Jan 21 13:04:40 localhost kernel: CPU topo: Num. threads per package:   1
Jan 21 13:04:40 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 21 13:04:40 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 21 13:04:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 21 13:04:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 21 13:04:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 21 13:04:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 21 13:04:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 21 13:04:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 21 13:04:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 21 13:04:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 21 13:04:40 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 21 13:04:40 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 21 13:04:40 localhost kernel: Booting paravirtualized kernel on KVM
Jan 21 13:04:40 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 21 13:04:40 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 21 13:04:40 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 21 13:04:40 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 21 13:04:40 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 21 13:04:40 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 21 13:04:40 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 13:04:40 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 21 13:04:40 localhost kernel: random: crng init done
Jan 21 13:04:40 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 21 13:04:40 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 21 13:04:40 localhost kernel: Fallback order for Node 0: 0 
Jan 21 13:04:40 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 21 13:04:40 localhost kernel: Policy zone: Normal
Jan 21 13:04:40 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 21 13:04:40 localhost kernel: software IO TLB: area num 8.
Jan 21 13:04:40 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 21 13:04:40 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 21 13:04:40 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 21 13:04:40 localhost kernel: Dynamic Preempt: voluntary
Jan 21 13:04:40 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 21 13:04:40 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 21 13:04:40 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 21 13:04:40 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 21 13:04:40 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 21 13:04:40 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 21 13:04:40 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 21 13:04:40 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 21 13:04:40 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 13:04:40 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 13:04:40 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 21 13:04:40 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 21 13:04:40 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 21 13:04:40 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 21 13:04:40 localhost kernel: Console: colour VGA+ 80x25
Jan 21 13:04:40 localhost kernel: printk: console [ttyS0] enabled
Jan 21 13:04:40 localhost kernel: ACPI: Core revision 20230331
Jan 21 13:04:40 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 21 13:04:40 localhost kernel: x2apic enabled
Jan 21 13:04:40 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 21 13:04:40 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 21 13:04:40 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 21 13:04:40 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 21 13:04:40 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 21 13:04:40 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 21 13:04:40 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 21 13:04:40 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 21 13:04:40 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 21 13:04:40 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 21 13:04:40 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 21 13:04:40 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 21 13:04:40 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 21 13:04:40 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 21 13:04:40 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 21 13:04:40 localhost kernel: x86/bugs: return thunk changed
Jan 21 13:04:40 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 21 13:04:40 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 21 13:04:40 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 21 13:04:40 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 21 13:04:40 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 21 13:04:40 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 21 13:04:40 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 21 13:04:40 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 21 13:04:40 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 21 13:04:40 localhost kernel: landlock: Up and running.
Jan 21 13:04:40 localhost kernel: Yama: becoming mindful.
Jan 21 13:04:40 localhost kernel: SELinux:  Initializing.
Jan 21 13:04:40 localhost kernel: LSM support for eBPF active
Jan 21 13:04:40 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 21 13:04:40 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 21 13:04:40 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 21 13:04:40 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 21 13:04:40 localhost kernel: ... version:                0
Jan 21 13:04:40 localhost kernel: ... bit width:              48
Jan 21 13:04:40 localhost kernel: ... generic registers:      6
Jan 21 13:04:40 localhost kernel: ... value mask:             0000ffffffffffff
Jan 21 13:04:40 localhost kernel: ... max period:             00007fffffffffff
Jan 21 13:04:40 localhost kernel: ... fixed-purpose events:   0
Jan 21 13:04:40 localhost kernel: ... event mask:             000000000000003f
Jan 21 13:04:40 localhost kernel: signal: max sigframe size: 1776
Jan 21 13:04:40 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 21 13:04:40 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 21 13:04:40 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 21 13:04:40 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 21 13:04:40 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 21 13:04:40 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 21 13:04:40 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 21 13:04:40 localhost kernel: node 0 deferred pages initialised in 11ms
Jan 21 13:04:40 localhost kernel: Memory: 7763824K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 21 13:04:40 localhost kernel: devtmpfs: initialized
Jan 21 13:04:40 localhost kernel: x86/mm: Memory block size: 128MB
Jan 21 13:04:40 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 21 13:04:40 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 21 13:04:40 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 21 13:04:40 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 21 13:04:40 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 21 13:04:40 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 21 13:04:40 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 21 13:04:40 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 21 13:04:40 localhost kernel: audit: type=2000 audit(1769000679.029:1): state=initialized audit_enabled=0 res=1
Jan 21 13:04:40 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 21 13:04:40 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 21 13:04:40 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 21 13:04:40 localhost kernel: cpuidle: using governor menu
Jan 21 13:04:40 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 21 13:04:40 localhost kernel: PCI: Using configuration type 1 for base access
Jan 21 13:04:40 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 21 13:04:40 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 21 13:04:40 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 21 13:04:40 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 21 13:04:40 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 21 13:04:40 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 21 13:04:40 localhost kernel: Demotion targets for Node 0: null
Jan 21 13:04:40 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 21 13:04:40 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 21 13:04:40 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 21 13:04:40 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 21 13:04:40 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 21 13:04:40 localhost kernel: ACPI: Interpreter enabled
Jan 21 13:04:40 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 21 13:04:40 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 21 13:04:40 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 21 13:04:40 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 21 13:04:40 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 21 13:04:40 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 21 13:04:40 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [3] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [4] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [5] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [6] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [7] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [8] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [9] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [10] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [11] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [12] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [13] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [14] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [15] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [16] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [17] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [18] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [19] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [20] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [21] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [22] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [23] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [24] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [25] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [26] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [27] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [28] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [29] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [30] registered
Jan 21 13:04:40 localhost kernel: acpiphp: Slot [31] registered
Jan 21 13:04:40 localhost kernel: PCI host bridge to bus 0000:00
Jan 21 13:04:40 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 21 13:04:40 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 21 13:04:40 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 21 13:04:40 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 21 13:04:40 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 21 13:04:40 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 21 13:04:40 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 21 13:04:40 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 21 13:04:40 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 21 13:04:40 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 21 13:04:40 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 21 13:04:40 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 21 13:04:40 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 21 13:04:40 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 21 13:04:40 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 21 13:04:40 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 21 13:04:40 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 21 13:04:40 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 21 13:04:40 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 21 13:04:40 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 21 13:04:40 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 21 13:04:40 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 21 13:04:40 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 21 13:04:40 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 21 13:04:40 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 21 13:04:40 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 21 13:04:40 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 21 13:04:40 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 21 13:04:40 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 21 13:04:40 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 21 13:04:40 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 21 13:04:40 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 21 13:04:40 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 21 13:04:40 localhost kernel: iommu: Default domain type: Translated
Jan 21 13:04:40 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 21 13:04:40 localhost kernel: SCSI subsystem initialized
Jan 21 13:04:40 localhost kernel: ACPI: bus type USB registered
Jan 21 13:04:40 localhost kernel: usbcore: registered new interface driver usbfs
Jan 21 13:04:40 localhost kernel: usbcore: registered new interface driver hub
Jan 21 13:04:40 localhost kernel: usbcore: registered new device driver usb
Jan 21 13:04:40 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 21 13:04:40 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 21 13:04:40 localhost kernel: PTP clock support registered
Jan 21 13:04:40 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 21 13:04:40 localhost kernel: NetLabel: Initializing
Jan 21 13:04:40 localhost kernel: NetLabel:  domain hash size = 128
Jan 21 13:04:40 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 21 13:04:40 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 21 13:04:40 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 21 13:04:40 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 21 13:04:40 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 21 13:04:40 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 21 13:04:40 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 21 13:04:40 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 21 13:04:40 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 21 13:04:40 localhost kernel: vgaarb: loaded
Jan 21 13:04:40 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 21 13:04:40 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 21 13:04:40 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 21 13:04:40 localhost kernel: pnp: PnP ACPI init
Jan 21 13:04:40 localhost kernel: pnp 00:03: [dma 2]
Jan 21 13:04:40 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 21 13:04:40 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 21 13:04:40 localhost kernel: NET: Registered PF_INET protocol family
Jan 21 13:04:40 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 21 13:04:40 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 21 13:04:40 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 21 13:04:40 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 21 13:04:40 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 21 13:04:40 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 21 13:04:40 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 21 13:04:40 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 21 13:04:40 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 21 13:04:40 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 21 13:04:40 localhost kernel: NET: Registered PF_XDP protocol family
Jan 21 13:04:40 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 21 13:04:40 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 21 13:04:40 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 21 13:04:40 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 21 13:04:40 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 21 13:04:40 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 21 13:04:40 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 21 13:04:40 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 117970 usecs
Jan 21 13:04:40 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 21 13:04:40 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 21 13:04:40 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 21 13:04:40 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 21 13:04:40 localhost kernel: ACPI: bus type thunderbolt registered
Jan 21 13:04:40 localhost kernel: Initialise system trusted keyrings
Jan 21 13:04:40 localhost kernel: Key type blacklist registered
Jan 21 13:04:40 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 21 13:04:40 localhost kernel: zbud: loaded
Jan 21 13:04:40 localhost kernel: integrity: Platform Keyring initialized
Jan 21 13:04:40 localhost kernel: integrity: Machine keyring initialized
Jan 21 13:04:40 localhost kernel: Freeing initrd memory: 87956K
Jan 21 13:04:40 localhost kernel: NET: Registered PF_ALG protocol family
Jan 21 13:04:40 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 21 13:04:40 localhost kernel: Key type asymmetric registered
Jan 21 13:04:40 localhost kernel: Asymmetric key parser 'x509' registered
Jan 21 13:04:40 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 21 13:04:40 localhost kernel: io scheduler mq-deadline registered
Jan 21 13:04:40 localhost kernel: io scheduler kyber registered
Jan 21 13:04:40 localhost kernel: io scheduler bfq registered
Jan 21 13:04:40 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 21 13:04:40 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 21 13:04:40 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 21 13:04:40 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 21 13:04:40 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 21 13:04:40 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 21 13:04:40 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 21 13:04:40 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 21 13:04:40 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 21 13:04:40 localhost kernel: Non-volatile memory driver v1.3
Jan 21 13:04:40 localhost kernel: rdac: device handler registered
Jan 21 13:04:40 localhost kernel: hp_sw: device handler registered
Jan 21 13:04:40 localhost kernel: emc: device handler registered
Jan 21 13:04:40 localhost kernel: alua: device handler registered
Jan 21 13:04:40 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 21 13:04:40 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 21 13:04:40 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 21 13:04:40 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 21 13:04:40 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 21 13:04:40 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 21 13:04:40 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 21 13:04:40 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 21 13:04:40 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 21 13:04:40 localhost kernel: hub 1-0:1.0: USB hub found
Jan 21 13:04:40 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 21 13:04:40 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 21 13:04:40 localhost kernel: usbserial: USB Serial support registered for generic
Jan 21 13:04:40 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 21 13:04:40 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 21 13:04:40 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 21 13:04:40 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 21 13:04:40 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 21 13:04:40 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 21 13:04:40 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 21 13:04:40 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-21T13:04:39 UTC (1769000679)
Jan 21 13:04:40 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 21 13:04:40 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 21 13:04:40 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 21 13:04:40 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 21 13:04:40 localhost kernel: usbcore: registered new interface driver usbhid
Jan 21 13:04:40 localhost kernel: usbhid: USB HID core driver
Jan 21 13:04:40 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 21 13:04:40 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 21 13:04:40 localhost kernel: Initializing XFRM netlink socket
Jan 21 13:04:40 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 21 13:04:40 localhost kernel: Segment Routing with IPv6
Jan 21 13:04:40 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 21 13:04:40 localhost kernel: mpls_gso: MPLS GSO support
Jan 21 13:04:40 localhost kernel: IPI shorthand broadcast: enabled
Jan 21 13:04:40 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 21 13:04:40 localhost kernel: AES CTR mode by8 optimization enabled
Jan 21 13:04:40 localhost kernel: sched_clock: Marking stable (1648007109, 151456849)->(1952343655, -152879697)
Jan 21 13:04:40 localhost kernel: registered taskstats version 1
Jan 21 13:04:40 localhost kernel: Loading compiled-in X.509 certificates
Jan 21 13:04:40 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 21 13:04:40 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 21 13:04:40 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 21 13:04:40 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 21 13:04:40 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 21 13:04:40 localhost kernel: Demotion targets for Node 0: null
Jan 21 13:04:40 localhost kernel: page_owner is disabled
Jan 21 13:04:40 localhost kernel: Key type .fscrypt registered
Jan 21 13:04:40 localhost kernel: Key type fscrypt-provisioning registered
Jan 21 13:04:40 localhost kernel: Key type big_key registered
Jan 21 13:04:40 localhost kernel: Key type encrypted registered
Jan 21 13:04:40 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 21 13:04:40 localhost kernel: Loading compiled-in module X.509 certificates
Jan 21 13:04:40 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 21 13:04:40 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 21 13:04:40 localhost kernel: ima: No architecture policies found
Jan 21 13:04:40 localhost kernel: evm: Initialising EVM extended attributes:
Jan 21 13:04:40 localhost kernel: evm: security.selinux
Jan 21 13:04:40 localhost kernel: evm: security.SMACK64 (disabled)
Jan 21 13:04:40 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 21 13:04:40 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 21 13:04:40 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 21 13:04:40 localhost kernel: evm: security.apparmor (disabled)
Jan 21 13:04:40 localhost kernel: evm: security.ima
Jan 21 13:04:40 localhost kernel: evm: security.capability
Jan 21 13:04:40 localhost kernel: evm: HMAC attrs: 0x1
Jan 21 13:04:40 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 21 13:04:40 localhost kernel: Running certificate verification RSA selftest
Jan 21 13:04:40 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 21 13:04:40 localhost kernel: Running certificate verification ECDSA selftest
Jan 21 13:04:40 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 21 13:04:40 localhost kernel: clk: Disabling unused clocks
Jan 21 13:04:40 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 21 13:04:40 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 21 13:04:40 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 21 13:04:40 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 21 13:04:40 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 21 13:04:40 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 21 13:04:40 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 21 13:04:40 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 21 13:04:40 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 21 13:04:40 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 21 13:04:40 localhost kernel: Run /init as init process
Jan 21 13:04:40 localhost kernel:   with arguments:
Jan 21 13:04:40 localhost kernel:     /init
Jan 21 13:04:40 localhost kernel:   with environment:
Jan 21 13:04:40 localhost kernel:     HOME=/
Jan 21 13:04:40 localhost kernel:     TERM=linux
Jan 21 13:04:40 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 21 13:04:40 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 21 13:04:40 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 21 13:04:40 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 21 13:04:40 localhost systemd[1]: Detected virtualization kvm.
Jan 21 13:04:40 localhost systemd[1]: Detected architecture x86-64.
Jan 21 13:04:40 localhost systemd[1]: Running in initrd.
Jan 21 13:04:40 localhost systemd[1]: No hostname configured, using default hostname.
Jan 21 13:04:40 localhost systemd[1]: Hostname set to <localhost>.
Jan 21 13:04:40 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 21 13:04:40 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 21 13:04:40 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 21 13:04:40 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 21 13:04:40 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 21 13:04:40 localhost systemd[1]: Reached target Local File Systems.
Jan 21 13:04:40 localhost systemd[1]: Reached target Path Units.
Jan 21 13:04:40 localhost systemd[1]: Reached target Slice Units.
Jan 21 13:04:40 localhost systemd[1]: Reached target Swaps.
Jan 21 13:04:40 localhost systemd[1]: Reached target Timer Units.
Jan 21 13:04:40 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 21 13:04:40 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 21 13:04:40 localhost systemd[1]: Listening on Journal Socket.
Jan 21 13:04:40 localhost systemd[1]: Listening on udev Control Socket.
Jan 21 13:04:40 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 21 13:04:40 localhost systemd[1]: Reached target Socket Units.
Jan 21 13:04:40 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 21 13:04:40 localhost systemd[1]: Starting Journal Service...
Jan 21 13:04:40 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 21 13:04:40 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 21 13:04:40 localhost systemd[1]: Starting Create System Users...
Jan 21 13:04:40 localhost systemd[1]: Starting Setup Virtual Console...
Jan 21 13:04:40 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 21 13:04:40 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 21 13:04:40 localhost systemd[1]: Finished Create System Users.
Jan 21 13:04:40 localhost systemd-journald[309]: Journal started
Jan 21 13:04:40 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/7823760d016641228fb23165351e57e7) is 8.0M, max 153.6M, 145.6M free.
Jan 21 13:04:40 localhost systemd-sysusers[313]: Creating group 'users' with GID 100.
Jan 21 13:04:40 localhost systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Jan 21 13:04:40 localhost systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 21 13:04:40 localhost systemd[1]: Started Journal Service.
Jan 21 13:04:40 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 21 13:04:40 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 21 13:04:40 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 21 13:04:40 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 21 13:04:40 localhost systemd[1]: Finished Setup Virtual Console.
Jan 21 13:04:40 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 21 13:04:40 localhost systemd[1]: Starting dracut cmdline hook...
Jan 21 13:04:40 localhost dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Jan 21 13:04:40 localhost dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 21 13:04:40 localhost systemd[1]: Finished dracut cmdline hook.
Jan 21 13:04:40 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 21 13:04:40 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 21 13:04:40 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 21 13:04:40 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 21 13:04:40 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 21 13:04:40 localhost kernel: RPC: Registered udp transport module.
Jan 21 13:04:40 localhost kernel: RPC: Registered tcp transport module.
Jan 21 13:04:40 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 21 13:04:40 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 21 13:04:40 localhost rpc.statd[445]: Version 2.5.4 starting
Jan 21 13:04:40 localhost rpc.statd[445]: Initializing NSM state
Jan 21 13:04:40 localhost rpc.idmapd[450]: Setting log level to 0
Jan 21 13:04:40 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 21 13:04:40 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 21 13:04:40 localhost systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Jan 21 13:04:40 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 21 13:04:40 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 21 13:04:40 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 21 13:04:40 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 21 13:04:40 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 21 13:04:41 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 21 13:04:41 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 21 13:04:41 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 21 13:04:41 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 21 13:04:41 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 21 13:04:41 localhost systemd[1]: Reached target Network.
Jan 21 13:04:41 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 21 13:04:41 localhost systemd[1]: Starting dracut initqueue hook...
Jan 21 13:04:41 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 21 13:04:41 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 21 13:04:41 localhost kernel:  vda: vda1
Jan 21 13:04:41 localhost kernel: libata version 3.00 loaded.
Jan 21 13:04:41 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 21 13:04:41 localhost kernel: scsi host0: ata_piix
Jan 21 13:04:41 localhost kernel: scsi host1: ata_piix
Jan 21 13:04:41 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 21 13:04:41 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 21 13:04:41 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 21 13:04:41 localhost systemd[1]: Reached target Initrd Root Device.
Jan 21 13:04:41 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 21 13:04:41 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 21 13:04:41 localhost systemd[1]: Reached target System Initialization.
Jan 21 13:04:41 localhost systemd[1]: Reached target Basic System.
Jan 21 13:04:41 localhost kernel: ata1: found unknown device (class 0)
Jan 21 13:04:41 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 21 13:04:41 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 21 13:04:41 localhost systemd-udevd[495]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 13:04:41 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 21 13:04:41 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 21 13:04:41 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 21 13:04:41 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 21 13:04:41 localhost systemd[1]: Finished dracut initqueue hook.
Jan 21 13:04:41 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 21 13:04:41 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 21 13:04:41 localhost systemd[1]: Reached target Remote File Systems.
Jan 21 13:04:41 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 21 13:04:41 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 21 13:04:41 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 21 13:04:41 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Jan 21 13:04:41 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 21 13:04:41 localhost systemd[1]: Mounting /sysroot...
Jan 21 13:04:41 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 21 13:04:41 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 21 13:04:42 localhost kernel: XFS (vda1): Ending clean mount
Jan 21 13:04:42 localhost systemd[1]: Mounted /sysroot.
Jan 21 13:04:42 localhost systemd[1]: Reached target Initrd Root File System.
Jan 21 13:04:42 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 21 13:04:42 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 21 13:04:42 localhost systemd[1]: Reached target Initrd File Systems.
Jan 21 13:04:42 localhost systemd[1]: Reached target Initrd Default Target.
Jan 21 13:04:42 localhost systemd[1]: Starting dracut mount hook...
Jan 21 13:04:42 localhost systemd[1]: Finished dracut mount hook.
Jan 21 13:04:42 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 21 13:04:42 localhost rpc.idmapd[450]: exiting on signal 15
Jan 21 13:04:42 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 21 13:04:42 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 21 13:04:42 localhost systemd[1]: Stopped target Network.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Timer Units.
Jan 21 13:04:42 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 21 13:04:42 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Basic System.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Path Units.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Remote File Systems.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Slice Units.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Socket Units.
Jan 21 13:04:42 localhost systemd[1]: Stopped target System Initialization.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Local File Systems.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Swaps.
Jan 21 13:04:42 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped dracut mount hook.
Jan 21 13:04:42 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 21 13:04:42 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 21 13:04:42 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 21 13:04:42 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 21 13:04:42 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 21 13:04:42 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 21 13:04:42 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 21 13:04:42 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 21 13:04:42 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 21 13:04:42 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 21 13:04:42 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 21 13:04:42 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 21 13:04:42 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Closed udev Control Socket.
Jan 21 13:04:42 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Closed udev Kernel Socket.
Jan 21 13:04:42 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 21 13:04:42 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 21 13:04:42 localhost systemd[1]: Starting Cleanup udev Database...
Jan 21 13:04:42 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 21 13:04:42 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 21 13:04:42 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Stopped Create System Users.
Jan 21 13:04:42 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 21 13:04:42 localhost systemd[1]: Finished Cleanup udev Database.
Jan 21 13:04:42 localhost systemd[1]: Reached target Switch Root.
Jan 21 13:04:42 localhost systemd[1]: Starting Switch Root...
Jan 21 13:04:42 localhost systemd[1]: Switching root.
Jan 21 13:04:42 localhost systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Jan 21 13:04:42 localhost systemd-journald[309]: Journal stopped
Jan 21 13:04:43 localhost kernel: audit: type=1404 audit(1769000682.793:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 21 13:04:43 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 13:04:43 localhost kernel: SELinux:  policy capability open_perms=1
Jan 21 13:04:43 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 13:04:43 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 21 13:04:43 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 13:04:43 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 13:04:43 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 13:04:43 localhost kernel: audit: type=1403 audit(1769000682.915:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 21 13:04:43 localhost systemd[1]: Successfully loaded SELinux policy in 124.794ms.
Jan 21 13:04:43 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.935ms.
Jan 21 13:04:43 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 21 13:04:43 localhost systemd[1]: Detected virtualization kvm.
Jan 21 13:04:43 localhost systemd[1]: Detected architecture x86-64.
Jan 21 13:04:43 localhost systemd-rc-local-generator[634]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:04:43 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 21 13:04:43 localhost systemd[1]: Stopped Switch Root.
Jan 21 13:04:43 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 21 13:04:43 localhost systemd[1]: Created slice Slice /system/getty.
Jan 21 13:04:43 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 21 13:04:43 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 21 13:04:43 localhost systemd[1]: Created slice User and Session Slice.
Jan 21 13:04:43 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 21 13:04:43 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 21 13:04:43 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 21 13:04:43 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 21 13:04:43 localhost systemd[1]: Stopped target Switch Root.
Jan 21 13:04:43 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 21 13:04:43 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 21 13:04:43 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 21 13:04:43 localhost systemd[1]: Reached target Path Units.
Jan 21 13:04:43 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 21 13:04:43 localhost systemd[1]: Reached target Slice Units.
Jan 21 13:04:43 localhost systemd[1]: Reached target Swaps.
Jan 21 13:04:43 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 21 13:04:43 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 21 13:04:43 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 21 13:04:43 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 21 13:04:43 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 21 13:04:43 localhost systemd[1]: Listening on udev Control Socket.
Jan 21 13:04:43 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 21 13:04:43 localhost systemd[1]: Mounting Huge Pages File System...
Jan 21 13:04:43 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 21 13:04:43 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 21 13:04:43 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 21 13:04:43 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 21 13:04:43 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 21 13:04:43 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 21 13:04:43 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 21 13:04:43 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 21 13:04:43 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 21 13:04:43 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 21 13:04:43 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 21 13:04:43 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 21 13:04:43 localhost systemd[1]: Stopped Journal Service.
Jan 21 13:04:43 localhost systemd[1]: Starting Journal Service...
Jan 21 13:04:43 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 21 13:04:43 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 21 13:04:43 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 13:04:43 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 21 13:04:43 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 21 13:04:43 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 21 13:04:43 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 21 13:04:43 localhost kernel: fuse: init (API version 7.37)
Jan 21 13:04:43 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 21 13:04:43 localhost systemd-journald[675]: Journal started
Jan 21 13:04:43 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 21 13:04:43 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 21 13:04:43 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 21 13:04:43 localhost systemd[1]: Started Journal Service.
Jan 21 13:04:43 localhost systemd[1]: Mounted Huge Pages File System.
Jan 21 13:04:43 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 21 13:04:43 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 21 13:04:43 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 21 13:04:43 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 21 13:04:43 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 21 13:04:43 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 21 13:04:43 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 21 13:04:43 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 21 13:04:43 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 21 13:04:43 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 21 13:04:43 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 21 13:04:43 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 21 13:04:43 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 21 13:04:43 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 21 13:04:43 localhost kernel: ACPI: bus type drm_connector registered
Jan 21 13:04:43 localhost systemd[1]: Mounting FUSE Control File System...
Jan 21 13:04:43 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 21 13:04:43 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 21 13:04:43 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 21 13:04:43 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 21 13:04:43 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 21 13:04:43 localhost systemd[1]: Starting Create System Users...
Jan 21 13:04:43 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 21 13:04:43 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 21 13:04:43 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 21 13:04:43 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 21 13:04:43 localhost systemd-journald[675]: Received client request to flush runtime journal.
Jan 21 13:04:43 localhost systemd[1]: Mounted FUSE Control File System.
Jan 21 13:04:43 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 21 13:04:43 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 21 13:04:43 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 21 13:04:43 localhost systemd[1]: Finished Create System Users.
Jan 21 13:04:43 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 21 13:04:43 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 21 13:04:43 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 21 13:04:43 localhost systemd[1]: Reached target Local File Systems.
Jan 21 13:04:43 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 21 13:04:43 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 21 13:04:43 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 21 13:04:43 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 21 13:04:43 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 21 13:04:43 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 21 13:04:43 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 21 13:04:43 localhost bootctl[692]: Couldn't find EFI system partition, skipping.
Jan 21 13:04:43 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 21 13:04:43 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 21 13:04:43 localhost systemd[1]: Starting Security Auditing Service...
Jan 21 13:04:43 localhost systemd[1]: Starting RPC Bind...
Jan 21 13:04:43 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 21 13:04:43 localhost auditd[698]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 21 13:04:43 localhost auditd[698]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 21 13:04:43 localhost systemd[1]: Started RPC Bind.
Jan 21 13:04:44 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 21 13:04:44 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 21 13:04:44 localhost augenrules[703]: /sbin/augenrules: No change
Jan 21 13:04:44 localhost augenrules[718]: No rules
Jan 21 13:04:44 localhost augenrules[718]: enabled 1
Jan 21 13:04:44 localhost augenrules[718]: failure 1
Jan 21 13:04:44 localhost augenrules[718]: pid 698
Jan 21 13:04:44 localhost augenrules[718]: rate_limit 0
Jan 21 13:04:44 localhost augenrules[718]: backlog_limit 8192
Jan 21 13:04:44 localhost augenrules[718]: lost 0
Jan 21 13:04:44 localhost augenrules[718]: backlog 0
Jan 21 13:04:44 localhost augenrules[718]: backlog_wait_time 60000
Jan 21 13:04:44 localhost augenrules[718]: backlog_wait_time_actual 0
Jan 21 13:04:44 localhost augenrules[718]: enabled 1
Jan 21 13:04:44 localhost augenrules[718]: failure 1
Jan 21 13:04:44 localhost augenrules[718]: pid 698
Jan 21 13:04:44 localhost augenrules[718]: rate_limit 0
Jan 21 13:04:44 localhost augenrules[718]: backlog_limit 8192
Jan 21 13:04:44 localhost augenrules[718]: lost 0
Jan 21 13:04:44 localhost augenrules[718]: backlog 0
Jan 21 13:04:44 localhost augenrules[718]: backlog_wait_time 60000
Jan 21 13:04:44 localhost augenrules[718]: backlog_wait_time_actual 0
Jan 21 13:04:44 localhost augenrules[718]: enabled 1
Jan 21 13:04:44 localhost augenrules[718]: failure 1
Jan 21 13:04:44 localhost augenrules[718]: pid 698
Jan 21 13:04:44 localhost augenrules[718]: rate_limit 0
Jan 21 13:04:44 localhost augenrules[718]: backlog_limit 8192
Jan 21 13:04:44 localhost augenrules[718]: lost 0
Jan 21 13:04:44 localhost augenrules[718]: backlog 0
Jan 21 13:04:44 localhost augenrules[718]: backlog_wait_time 60000
Jan 21 13:04:44 localhost augenrules[718]: backlog_wait_time_actual 0
Jan 21 13:04:44 localhost systemd[1]: Started Security Auditing Service.
Jan 21 13:04:44 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 21 13:04:44 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 21 13:04:44 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 21 13:04:44 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 21 13:04:44 localhost systemd[1]: Starting Update is Completed...
Jan 21 13:04:44 localhost systemd[1]: Finished Update is Completed.
Jan 21 13:04:44 localhost systemd-udevd[726]: Using default interface naming scheme 'rhel-9.0'.
Jan 21 13:04:44 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 21 13:04:44 localhost systemd[1]: Reached target System Initialization.
Jan 21 13:04:44 localhost systemd[1]: Started dnf makecache --timer.
Jan 21 13:04:44 localhost systemd[1]: Started Daily rotation of log files.
Jan 21 13:04:44 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 21 13:04:44 localhost systemd[1]: Reached target Timer Units.
Jan 21 13:04:44 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 21 13:04:44 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 21 13:04:44 localhost systemd[1]: Reached target Socket Units.
Jan 21 13:04:44 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 21 13:04:44 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 13:04:44 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 21 13:04:44 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 21 13:04:44 localhost systemd-udevd[730]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 13:04:44 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 21 13:04:44 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 21 13:04:44 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 21 13:04:44 localhost systemd[1]: Reached target Basic System.
Jan 21 13:04:44 localhost dbus-broker-lau[748]: Ready
Jan 21 13:04:44 localhost systemd[1]: Starting NTP client/server...
Jan 21 13:04:44 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 21 13:04:44 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 21 13:04:44 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 21 13:04:44 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 21 13:04:44 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 21 13:04:44 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 21 13:04:44 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 21 13:04:44 localhost systemd[1]: Started irqbalance daemon.
Jan 21 13:04:44 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 21 13:04:44 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 13:04:44 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 13:04:44 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 13:04:44 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 21 13:04:44 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 21 13:04:44 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 21 13:04:44 localhost systemd[1]: Starting User Login Management...
Jan 21 13:04:44 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 21 13:04:44 localhost chronyd[787]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 21 13:04:44 localhost chronyd[787]: Loaded 0 symmetric keys
Jan 21 13:04:44 localhost chronyd[787]: Using right/UTC timezone to obtain leap second data
Jan 21 13:04:44 localhost chronyd[787]: Loaded seccomp filter (level 2)
Jan 21 13:04:44 localhost systemd[1]: Started NTP client/server.
Jan 21 13:04:45 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 21 13:04:45 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 21 13:04:45 localhost kernel: Console: switching to colour dummy device 80x25
Jan 21 13:04:45 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 21 13:04:45 localhost kernel: [drm] features: -context_init
Jan 21 13:04:45 localhost kernel: [drm] number of scanouts: 1
Jan 21 13:04:45 localhost kernel: [drm] number of cap sets: 0
Jan 21 13:04:45 localhost systemd-logind[780]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 21 13:04:45 localhost systemd-logind[780]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 21 13:04:45 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 21 13:04:45 localhost systemd-logind[780]: New seat seat0.
Jan 21 13:04:45 localhost systemd[1]: Started User Login Management.
Jan 21 13:04:45 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 21 13:04:45 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 21 13:04:45 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 21 13:04:45 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 21 13:04:45 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 21 13:04:45 localhost kernel: kvm_amd: TSC scaling supported
Jan 21 13:04:45 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 21 13:04:45 localhost kernel: kvm_amd: Nested Paging enabled
Jan 21 13:04:45 localhost kernel: kvm_amd: LBR virtualization supported
Jan 21 13:04:45 localhost iptables.init[774]: iptables: Applying firewall rules: [  OK  ]
Jan 21 13:04:45 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 21 13:04:45 localhost cloud-init[836]: Cloud-init v. 24.4-8.el9 running 'init-local' at Wed, 21 Jan 2026 13:04:45 +0000. Up 7.47 seconds.
Jan 21 13:04:45 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 21 13:04:45 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 21 13:04:45 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpftitivn9.mount: Deactivated successfully.
Jan 21 13:04:45 localhost systemd[1]: Starting Hostname Service...
Jan 21 13:04:45 localhost systemd[1]: Started Hostname Service.
Jan 21 13:04:45 np0005590528.novalocal systemd-hostnamed[850]: Hostname set to <np0005590528.novalocal> (static)
Jan 21 13:04:45 np0005590528.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 21 13:04:45 np0005590528.novalocal systemd[1]: Reached target Preparation for Network.
Jan 21 13:04:45 np0005590528.novalocal systemd[1]: Starting Network Manager...
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9150] NetworkManager (version 1.54.3-2.el9) is starting... (boot:3db60b82-452d-4090-8c5d-4863fb6f0cf4)
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9154] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9221] manager[0x55dd2cf37000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9261] hostname: hostname: using hostnamed
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9261] hostname: static hostname changed from (none) to "np0005590528.novalocal"
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9264] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9380] manager[0x55dd2cf37000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9381] manager[0x55dd2cf37000]: rfkill: WWAN hardware radio set enabled
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9413] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9413] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9413] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9414] manager: Networking is enabled by state file
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9415] settings: Loaded settings plugin: keyfile (internal)
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9423] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 13:04:45 np0005590528.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9538] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9547] dhcp: init: Using DHCP client 'internal'
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9549] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9558] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9565] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9575] device (lo): Activation: starting connection 'lo' (cb2caf48-e7d3-4014-a1eb-1fea24d085c3)
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9585] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9587] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9613] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9617] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9619] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9621] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9623] device (eth0): carrier: link connected
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9626] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9632] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9639] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9643] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9644] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9646] manager: NetworkManager state is now CONNECTING
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9648] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9653] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9657] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 13:04:45 np0005590528.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 13:04:45 np0005590528.novalocal systemd[1]: Started Network Manager.
Jan 21 13:04:45 np0005590528.novalocal systemd[1]: Reached target Network.
Jan 21 13:04:45 np0005590528.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 21 13:04:45 np0005590528.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 21 13:04:45 np0005590528.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9960] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9962] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 13:04:45 np0005590528.novalocal NetworkManager[854]: <info>  [1769000685.9967] device (lo): Activation: successful, device activated.
Jan 21 13:04:46 np0005590528.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 21 13:04:46 np0005590528.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 21 13:04:46 np0005590528.novalocal systemd[1]: Reached target NFS client services.
Jan 21 13:04:46 np0005590528.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 21 13:04:46 np0005590528.novalocal systemd[1]: Reached target Remote File Systems.
Jan 21 13:04:46 np0005590528.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 21 13:04:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000687.5019] dhcp4 (eth0): state changed new lease, address=38.102.83.175
Jan 21 13:04:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000687.5036] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 13:04:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000687.5071] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:04:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000687.5119] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:04:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000687.5122] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:04:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000687.5129] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 13:04:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000687.5133] device (eth0): Activation: successful, device activated.
Jan 21 13:04:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000687.5141] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 13:04:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000687.5147] manager: startup complete
Jan 21 13:04:47 np0005590528.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 21 13:04:47 np0005590528.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: Cloud-init v. 24.4-8.el9 running 'init' at Wed, 21 Jan 2026 13:04:47 +0000. Up 9.93 seconds.
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: |  eth0  | True |        38.102.83.175         | 255.255.255.0 | global | fa:16:3e:d2:55:45 |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fed2:5545/64 |       .       |  link  | fa:16:3e:d2:55:45 |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Jan 21 13:04:47 np0005590528.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 21 13:04:50 np0005590528.novalocal useradd[985]: new group: name=cloud-user, GID=1001
Jan 21 13:04:50 np0005590528.novalocal useradd[985]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 21 13:04:50 np0005590528.novalocal useradd[985]: add 'cloud-user' to group 'adm'
Jan 21 13:04:50 np0005590528.novalocal useradd[985]: add 'cloud-user' to group 'systemd-journal'
Jan 21 13:04:50 np0005590528.novalocal useradd[985]: add 'cloud-user' to shadow group 'adm'
Jan 21 13:04:50 np0005590528.novalocal useradd[985]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: Generating public/private rsa key pair.
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: The key fingerprint is:
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: SHA256:koHhN7PlwhAmz0+qHIoWsot03fjZClAcpZopG2yUXTY root@np0005590528.novalocal
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: The key's randomart image is:
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: +---[RSA 3072]----+
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |  . +E..         |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |  o*+++          |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: | o .*+* .        |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |o   =B O         |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |.*.=. B S        |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |++=oo oo         |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |++o. + .         |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |+..   o o        |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |o      +..       |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: Generating public/private ecdsa key pair.
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: The key fingerprint is:
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: SHA256:GgxrRJWlR+WEqu57awiRBI5JlGqTwKNTw+A7D3uNG54 root@np0005590528.novalocal
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: The key's randomart image is:
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: +---[ECDSA 256]---+
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |=*. ...oooo      |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |*==.  .o.o       |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |*=ooo ... .      |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |+++. +..         |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |.=..o.o S        |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |  *.+  o         |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: | . B o.          |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |  o * o          |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |   Eo+..         |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: Generating public/private ed25519 key pair.
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: The key fingerprint is:
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: SHA256:njsIb0PH+ekAB6UR3K8yhaujWkH5GgS/BixUqpvhO/0 root@np0005590528.novalocal
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: The key's randomart image is:
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: +--[ED25519 256]--+
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |.... .oo.        |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |oo..  .+.        |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |oo=   o. .       |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |o+ o  ... .      |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |o = . .+So       |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |.= +. =+=.       |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |o.+  = =+. .     |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: | o..o = .oo      |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: |.oo..E ..o.      |
Jan 21 13:04:52 np0005590528.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Reached target Network is Online.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Starting System Logging Service...
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 21 13:04:52 np0005590528.novalocal sm-notify[1001]: Version 2.5.4 starting
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Starting Permit User Sessions...
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 21 13:04:52 np0005590528.novalocal sshd[1003]: Server listening on 0.0.0.0 port 22.
Jan 21 13:04:52 np0005590528.novalocal sshd[1003]: Server listening on :: port 22.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Finished Permit User Sessions.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Started Command Scheduler.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Started Getty on tty1.
Jan 21 13:04:52 np0005590528.novalocal crond[1006]: (CRON) STARTUP (1.5.7)
Jan 21 13:04:52 np0005590528.novalocal crond[1006]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 21 13:04:52 np0005590528.novalocal crond[1006]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 20% if used.)
Jan 21 13:04:52 np0005590528.novalocal crond[1006]: (CRON) INFO (running with inotify support)
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Reached target Login Prompts.
Jan 21 13:04:52 np0005590528.novalocal rsyslogd[1002]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1002" x-info="https://www.rsyslog.com"] start
Jan 21 13:04:52 np0005590528.novalocal rsyslogd[1002]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Started System Logging Service.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Reached target Multi-User System.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 21 13:04:52 np0005590528.novalocal rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 13:04:52 np0005590528.novalocal kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Jan 21 13:04:52 np0005590528.novalocal kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 21 13:04:52 np0005590528.novalocal chronyd[787]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Jan 21 13:04:52 np0005590528.novalocal chronyd[787]: System clock TAI offset set to 37 seconds
Jan 21 13:04:52 np0005590528.novalocal cloud-init[1144]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Wed, 21 Jan 2026 13:04:52 +0000. Up 14.55 seconds.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 21 13:04:52 np0005590528.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 21 13:04:52 np0005590528.novalocal sshd-session[1253]: Unable to negotiate with 38.102.83.114 port 34112: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 21 13:04:52 np0005590528.novalocal sshd-session[1267]: Unable to negotiate with 38.102.83.114 port 34142: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 21 13:04:52 np0005590528.novalocal dracut[1270]: dracut-057-102.git20250818.el9
Jan 21 13:04:52 np0005590528.novalocal sshd-session[1271]: Unable to negotiate with 38.102.83.114 port 34158: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 21 13:04:52 np0005590528.novalocal sshd-session[1240]: Connection closed by 38.102.83.114 port 34100 [preauth]
Jan 21 13:04:52 np0005590528.novalocal sshd-session[1261]: Connection closed by 38.102.83.114 port 34126 [preauth]
Jan 21 13:04:52 np0005590528.novalocal sshd-session[1292]: Unable to negotiate with 38.102.83.114 port 34188: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 21 13:04:52 np0005590528.novalocal sshd-session[1295]: Unable to negotiate with 38.102.83.114 port 34190: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 21 13:04:52 np0005590528.novalocal sshd-session[1281]: Connection closed by 38.102.83.114 port 34168 [preauth]
Jan 21 13:04:52 np0005590528.novalocal cloud-init[1300]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Wed, 21 Jan 2026 13:04:52 +0000. Up 14.98 seconds.
Jan 21 13:04:52 np0005590528.novalocal sshd-session[1290]: Connection closed by 38.102.83.114 port 34178 [preauth]
Jan 21 13:04:52 np0005590528.novalocal dracut[1273]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 21 13:04:52 np0005590528.novalocal cloud-init[1332]: #############################################################
Jan 21 13:04:52 np0005590528.novalocal cloud-init[1335]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 21 13:04:52 np0005590528.novalocal cloud-init[1345]: 256 SHA256:GgxrRJWlR+WEqu57awiRBI5JlGqTwKNTw+A7D3uNG54 root@np0005590528.novalocal (ECDSA)
Jan 21 13:04:53 np0005590528.novalocal cloud-init[1350]: 256 SHA256:njsIb0PH+ekAB6UR3K8yhaujWkH5GgS/BixUqpvhO/0 root@np0005590528.novalocal (ED25519)
Jan 21 13:04:53 np0005590528.novalocal cloud-init[1356]: 3072 SHA256:koHhN7PlwhAmz0+qHIoWsot03fjZClAcpZopG2yUXTY root@np0005590528.novalocal (RSA)
Jan 21 13:04:53 np0005590528.novalocal cloud-init[1359]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 21 13:04:53 np0005590528.novalocal cloud-init[1361]: #############################################################
Jan 21 13:04:53 np0005590528.novalocal cloud-init[1300]: Cloud-init v. 24.4-8.el9 finished at Wed, 21 Jan 2026 13:04:53 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 15.16 seconds
Jan 21 13:04:53 np0005590528.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 21 13:04:53 np0005590528.novalocal systemd[1]: Reached target Cloud-init target.
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 21 13:04:53 np0005590528.novalocal dracut[1273]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: memstrack is not available
Jan 21 13:04:54 np0005590528.novalocal chronyd[787]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: memstrack is not available
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 21 13:04:54 np0005590528.novalocal dracut[1273]: *** Including module: systemd ***
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: IRQ 25 affinity is now unmanaged
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: IRQ 31 affinity is now unmanaged
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: IRQ 28 affinity is now unmanaged
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: IRQ 32 affinity is now unmanaged
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: IRQ 30 affinity is now unmanaged
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 21 13:04:54 np0005590528.novalocal irqbalance[775]: IRQ 29 affinity is now unmanaged
Jan 21 13:04:55 np0005590528.novalocal dracut[1273]: *** Including module: fips ***
Jan 21 13:04:55 np0005590528.novalocal dracut[1273]: *** Including module: systemd-initrd ***
Jan 21 13:04:55 np0005590528.novalocal dracut[1273]: *** Including module: i18n ***
Jan 21 13:04:55 np0005590528.novalocal dracut[1273]: *** Including module: drm ***
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]: *** Including module: prefixdevname ***
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]: *** Including module: kernel-modules ***
Jan 21 13:04:56 np0005590528.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]: *** Including module: kernel-modules-extra ***
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]: *** Including module: qemu ***
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]: *** Including module: fstab-sys ***
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]: *** Including module: rootfs-block ***
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]: *** Including module: terminfo ***
Jan 21 13:04:56 np0005590528.novalocal dracut[1273]: *** Including module: udev-rules ***
Jan 21 13:04:57 np0005590528.novalocal dracut[1273]: Skipping udev rule: 91-permissions.rules
Jan 21 13:04:57 np0005590528.novalocal dracut[1273]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 21 13:04:57 np0005590528.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 13:04:57 np0005590528.novalocal dracut[1273]: *** Including module: virtiofs ***
Jan 21 13:04:57 np0005590528.novalocal dracut[1273]: *** Including module: dracut-systemd ***
Jan 21 13:04:57 np0005590528.novalocal dracut[1273]: *** Including module: usrmount ***
Jan 21 13:04:57 np0005590528.novalocal dracut[1273]: *** Including module: base ***
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]: *** Including module: fs-lib ***
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]: *** Including module: kdumpbase ***
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:   microcode_ctl module: mangling fw_dir
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: configuration "intel" is ignored
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 21 13:04:58 np0005590528.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 21 13:04:59 np0005590528.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 21 13:04:59 np0005590528.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 21 13:04:59 np0005590528.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 21 13:04:59 np0005590528.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 21 13:04:59 np0005590528.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 21 13:04:59 np0005590528.novalocal dracut[1273]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 21 13:04:59 np0005590528.novalocal dracut[1273]: *** Including module: openssl ***
Jan 21 13:04:59 np0005590528.novalocal dracut[1273]: *** Including module: shutdown ***
Jan 21 13:04:59 np0005590528.novalocal dracut[1273]: *** Including module: squash ***
Jan 21 13:04:59 np0005590528.novalocal dracut[1273]: *** Including modules done ***
Jan 21 13:04:59 np0005590528.novalocal dracut[1273]: *** Installing kernel module dependencies ***
Jan 21 13:05:00 np0005590528.novalocal dracut[1273]: *** Installing kernel module dependencies done ***
Jan 21 13:05:00 np0005590528.novalocal dracut[1273]: *** Resolving executable dependencies ***
Jan 21 13:05:01 np0005590528.novalocal dracut[1273]: *** Resolving executable dependencies done ***
Jan 21 13:05:01 np0005590528.novalocal dracut[1273]: *** Generating early-microcode cpio image ***
Jan 21 13:05:01 np0005590528.novalocal dracut[1273]: *** Store current command line parameters ***
Jan 21 13:05:01 np0005590528.novalocal dracut[1273]: Stored kernel commandline:
Jan 21 13:05:01 np0005590528.novalocal dracut[1273]: No dracut internal kernel commandline stored in the initramfs
Jan 21 13:05:02 np0005590528.novalocal dracut[1273]: *** Install squash loader ***
Jan 21 13:05:03 np0005590528.novalocal dracut[1273]: *** Squashing the files inside the initramfs ***
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: *** Squashing the files inside the initramfs done ***
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: *** Hardlinking files ***
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: Mode:           real
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: Files:          50
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: Linked:         0 files
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: Compared:       0 xattrs
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: Compared:       0 files
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: Saved:          0 B
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: Duration:       0.000760 seconds
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: *** Hardlinking files done ***
Jan 21 13:05:04 np0005590528.novalocal dracut[1273]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 21 13:05:05 np0005590528.novalocal kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Jan 21 13:05:05 np0005590528.novalocal kdumpctl[1015]: kdump: Starting kdump: [OK]
Jan 21 13:05:05 np0005590528.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 21 13:05:05 np0005590528.novalocal systemd[1]: Startup finished in 1.994s (kernel) + 2.897s (initrd) + 22.533s (userspace) = 27.424s.
Jan 21 13:05:10 np0005590528.novalocal sshd-session[4300]: Accepted publickey for zuul from 38.102.83.114 port 52106 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 21 13:05:10 np0005590528.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 21 13:05:10 np0005590528.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 21 13:05:10 np0005590528.novalocal systemd-logind[780]: New session 1 of user zuul.
Jan 21 13:05:10 np0005590528.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 21 13:05:10 np0005590528.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 21 13:05:10 np0005590528.novalocal systemd[4304]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Queued start job for default target Main User Target.
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Created slice User Application Slice.
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Reached target Paths.
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Reached target Timers.
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Starting D-Bus User Message Bus Socket...
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Starting Create User's Volatile Files and Directories...
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Finished Create User's Volatile Files and Directories.
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Listening on D-Bus User Message Bus Socket.
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Reached target Sockets.
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Reached target Basic System.
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Reached target Main User Target.
Jan 21 13:05:11 np0005590528.novalocal systemd[4304]: Startup finished in 141ms.
Jan 21 13:05:11 np0005590528.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 21 13:05:11 np0005590528.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 21 13:05:11 np0005590528.novalocal sshd-session[4300]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:05:11 np0005590528.novalocal python3[4386]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:05:15 np0005590528.novalocal python3[4414]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:05:15 np0005590528.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 13:05:21 np0005590528.novalocal python3[4474]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:05:22 np0005590528.novalocal python3[4514]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 21 13:05:24 np0005590528.novalocal python3[4540]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsdF4LHcgCitzYuwpx8IeWyXT6x4WtGnCWEH2AQsvDbI0qT1tLuhHEXMtGugk/Pi8705OFZ6oLOh1v5dxeB09R5GRYlKJqs3AWQBIzQ19/qlUi2IxjhttDTH2WNwx7Zy/ku8/ZiIuD0uwSaZW6C8vWfRviIyOt7SPr67C6i4Iu8NsM+frCvwveSxcQZqDzT+P5bGJ7dgR7l8OU08b5nG0LWZMocQAguPV9kvvxLG1pKi2R/9BnSzVlnicsOz5kUuOS8oJEWzZaXTq+0EaBsv/sfakOO0sdeQLIg5TKPIruwiSi4T4LwUHlQm3OErcRl46I5Nl8HOMS9bZksFo0TCG6mzTjHe5Y/BC/bLWMY9IKh+pxKKm5LP2oaxXZ9PQC1qQrCv1F5o6Fp/g/0uSamI5yMMF+aqQParEMZTL9BfbNSbszgl1m002zzgbrDKapw1xBnfUFax6bhhEW3GIxZoQFDIqnI0CjKHXb7o4BvmtBfT2hNwfajrfV2j9ZhFtT0Rk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:24 np0005590528.novalocal python3[4564]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:24 np0005590528.novalocal python3[4663]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:05:25 np0005590528.novalocal python3[4734]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769000724.6284468-207-112255060050147/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=6e3408b1e76b4c9180c1ac911338c42c_id_rsa follow=False checksum=41d9316c0b93c8992b91b1784050b7c8701818b6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:25 np0005590528.novalocal python3[4857]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:05:26 np0005590528.novalocal python3[4928]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769000725.5610757-240-203545685258382/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=6e3408b1e76b4c9180c1ac911338c42c_id_rsa.pub follow=False checksum=c49a97f0c4fc2f33fec803053643adf66dae09bd backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:27 np0005590528.novalocal python3[4976]: ansible-ping Invoked with data=pong
Jan 21 13:05:28 np0005590528.novalocal python3[5000]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:05:30 np0005590528.novalocal python3[5058]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 21 13:05:31 np0005590528.novalocal python3[5090]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:31 np0005590528.novalocal python3[5114]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:31 np0005590528.novalocal python3[5138]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:32 np0005590528.novalocal python3[5162]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:32 np0005590528.novalocal python3[5186]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:32 np0005590528.novalocal python3[5210]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:34 np0005590528.novalocal sudo[5234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mslpsowwsfmhqyftffoscoadnlevymmy ; /usr/bin/python3'
Jan 21 13:05:34 np0005590528.novalocal sudo[5234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:05:34 np0005590528.novalocal python3[5236]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:34 np0005590528.novalocal sudo[5234]: pam_unix(sudo:session): session closed for user root
Jan 21 13:05:34 np0005590528.novalocal sudo[5312]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyxequnzgrcbutrhevtflbchfbdezldj ; /usr/bin/python3'
Jan 21 13:05:34 np0005590528.novalocal sudo[5312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:05:34 np0005590528.novalocal python3[5314]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:05:34 np0005590528.novalocal sudo[5312]: pam_unix(sudo:session): session closed for user root
Jan 21 13:05:35 np0005590528.novalocal sudo[5385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hefidtsdvvrldptqqjduipgcrdwdoyyd ; /usr/bin/python3'
Jan 21 13:05:35 np0005590528.novalocal sudo[5385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:05:35 np0005590528.novalocal python3[5387]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769000734.4488103-21-76952579707930/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:35 np0005590528.novalocal sudo[5385]: pam_unix(sudo:session): session closed for user root
Jan 21 13:05:36 np0005590528.novalocal python3[5435]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:36 np0005590528.novalocal python3[5459]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:36 np0005590528.novalocal python3[5483]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:36 np0005590528.novalocal python3[5507]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:37 np0005590528.novalocal python3[5531]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:37 np0005590528.novalocal python3[5555]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:37 np0005590528.novalocal python3[5579]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:38 np0005590528.novalocal python3[5603]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:38 np0005590528.novalocal python3[5627]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:38 np0005590528.novalocal python3[5651]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:38 np0005590528.novalocal python3[5675]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:39 np0005590528.novalocal python3[5699]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:39 np0005590528.novalocal python3[5723]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:39 np0005590528.novalocal python3[5747]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:40 np0005590528.novalocal python3[5771]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:40 np0005590528.novalocal python3[5795]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:40 np0005590528.novalocal python3[5819]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:40 np0005590528.novalocal python3[5843]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:41 np0005590528.novalocal python3[5867]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:41 np0005590528.novalocal python3[5891]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:41 np0005590528.novalocal python3[5915]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:42 np0005590528.novalocal python3[5939]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:42 np0005590528.novalocal python3[5963]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:42 np0005590528.novalocal python3[5987]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:43 np0005590528.novalocal python3[6011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:43 np0005590528.novalocal python3[6035]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:05:45 np0005590528.novalocal sudo[6059]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvmyzanjwpuozvvoprumpjqxjrnbrlbn ; /usr/bin/python3'
Jan 21 13:05:45 np0005590528.novalocal sudo[6059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:05:45 np0005590528.novalocal python3[6061]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 13:05:45 np0005590528.novalocal systemd[1]: Starting Time & Date Service...
Jan 21 13:05:45 np0005590528.novalocal systemd[1]: Started Time & Date Service.
Jan 21 13:05:45 np0005590528.novalocal systemd-timedated[6063]: Changed time zone to 'UTC' (UTC).
Jan 21 13:05:45 np0005590528.novalocal sudo[6059]: pam_unix(sudo:session): session closed for user root
Jan 21 13:05:45 np0005590528.novalocal sudo[6090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aszjkaaeifiptchwdcsszcujpjwfobqe ; /usr/bin/python3'
Jan 21 13:05:45 np0005590528.novalocal sudo[6090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:05:46 np0005590528.novalocal python3[6092]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:46 np0005590528.novalocal sudo[6090]: pam_unix(sudo:session): session closed for user root
Jan 21 13:05:46 np0005590528.novalocal python3[6168]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:05:46 np0005590528.novalocal python3[6239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769000746.191636-153-26031791702835/source _original_basename=tmpq566vlfp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:47 np0005590528.novalocal python3[6339]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:05:47 np0005590528.novalocal python3[6410]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769000747.060639-183-28577305908758/source _original_basename=tmp8je96xv6 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:48 np0005590528.novalocal sudo[6510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwtmiuukzzougvyaluyvypsjlobrzqsm ; /usr/bin/python3'
Jan 21 13:05:48 np0005590528.novalocal sudo[6510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:05:48 np0005590528.novalocal python3[6512]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:05:48 np0005590528.novalocal sudo[6510]: pam_unix(sudo:session): session closed for user root
Jan 21 13:05:48 np0005590528.novalocal sudo[6583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okgvrnqdgxqtkiavjkcnyqbkhjjnuqjr ; /usr/bin/python3'
Jan 21 13:05:48 np0005590528.novalocal sudo[6583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:05:48 np0005590528.novalocal python3[6585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769000748.129764-231-101550851739236/source _original_basename=tmpqxekt5vr follow=False checksum=66d49d5bab4d1d03dee6fa3749e9aaa420813b05 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:48 np0005590528.novalocal sudo[6583]: pam_unix(sudo:session): session closed for user root
Jan 21 13:05:49 np0005590528.novalocal python3[6633]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:05:49 np0005590528.novalocal python3[6659]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:05:50 np0005590528.novalocal sudo[6737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjtramlxqdulxbthyezdxmlxpwcfumhd ; /usr/bin/python3'
Jan 21 13:05:50 np0005590528.novalocal sudo[6737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:05:50 np0005590528.novalocal python3[6739]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:05:50 np0005590528.novalocal sudo[6737]: pam_unix(sudo:session): session closed for user root
Jan 21 13:05:50 np0005590528.novalocal sudo[6810]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pirowopzczilxkedznxgetakbtmwsaak ; /usr/bin/python3'
Jan 21 13:05:50 np0005590528.novalocal sudo[6810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:05:50 np0005590528.novalocal python3[6812]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769000749.873512-273-152258448807072/source _original_basename=tmp9pfe2lww follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:05:50 np0005590528.novalocal sudo[6810]: pam_unix(sudo:session): session closed for user root
Jan 21 13:05:51 np0005590528.novalocal sudo[6861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvultbsytrdhovncgslfziselnbbywsw ; /usr/bin/python3'
Jan 21 13:05:51 np0005590528.novalocal sudo[6861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:05:51 np0005590528.novalocal python3[6863]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-9fb1-0c7f-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:05:51 np0005590528.novalocal sudo[6861]: pam_unix(sudo:session): session closed for user root
Jan 21 13:05:51 np0005590528.novalocal python3[6891]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-9fb1-0c7f-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 21 13:05:52 np0005590528.novalocal python3[6920]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:06:10 np0005590528.novalocal sudo[6944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrvwkyipfdkqzftczahpmffiocbhfbew ; /usr/bin/python3'
Jan 21 13:06:10 np0005590528.novalocal sudo[6944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:06:10 np0005590528.novalocal python3[6946]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:06:10 np0005590528.novalocal sudo[6944]: pam_unix(sudo:session): session closed for user root
Jan 21 13:06:15 np0005590528.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 13:06:47 np0005590528.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 21 13:06:47 np0005590528.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 21 13:06:47 np0005590528.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 21 13:06:47 np0005590528.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 21 13:06:47 np0005590528.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 21 13:06:47 np0005590528.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 21 13:06:47 np0005590528.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 21 13:06:47 np0005590528.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 21 13:06:47 np0005590528.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 21 13:06:47 np0005590528.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 21 13:06:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000807.3824] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 13:06:47 np0005590528.novalocal systemd-udevd[6950]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 13:06:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000807.4044] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:06:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000807.4077] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 21 13:06:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000807.4081] device (eth1): carrier: link connected
Jan 21 13:06:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000807.4084] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 21 13:06:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000807.4093] policy: auto-activating connection 'Wired connection 1' (5f608bee-bbd6-3307-abae-f2f56ef54334)
Jan 21 13:06:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000807.4098] device (eth1): Activation: starting connection 'Wired connection 1' (5f608bee-bbd6-3307-abae-f2f56ef54334)
Jan 21 13:06:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000807.4099] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:06:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000807.4106] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:06:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000807.4111] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:06:47 np0005590528.novalocal NetworkManager[854]: <info>  [1769000807.4116] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 21 13:06:48 np0005590528.novalocal python3[6976]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-a58f-98b0-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:06:58 np0005590528.novalocal sudo[7054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzpwdtmfnwfjzxcmalfxyshyhxhfvehv ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 21 13:06:58 np0005590528.novalocal sudo[7054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:06:58 np0005590528.novalocal python3[7056]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:06:58 np0005590528.novalocal sudo[7054]: pam_unix(sudo:session): session closed for user root
Jan 21 13:06:58 np0005590528.novalocal sudo[7127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmgzbbkujmabrmvrovlrihuzzrqggibo ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 21 13:06:58 np0005590528.novalocal sudo[7127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:06:58 np0005590528.novalocal python3[7129]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769000818.1373737-102-176631469967612/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=118e1e84e3aa55e6ebd5025d71e642a64e6c9e3d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:06:58 np0005590528.novalocal sudo[7127]: pam_unix(sudo:session): session closed for user root
Jan 21 13:06:59 np0005590528.novalocal sudo[7177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmvyktvxmmlchdcjbwdfkmmbhgebzbal ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 21 13:06:59 np0005590528.novalocal sudo[7177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:06:59 np0005590528.novalocal python3[7179]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[854]: <info>  [1769000819.5897] caught SIGTERM, shutting down normally.
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: Stopping Network Manager...
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[854]: <info>  [1769000819.5918] dhcp4 (eth0): canceled DHCP transaction
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[854]: <info>  [1769000819.5919] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[854]: <info>  [1769000819.5919] dhcp4 (eth0): state changed no lease
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[854]: <info>  [1769000819.5923] manager: NetworkManager state is now CONNECTING
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[854]: <info>  [1769000819.6016] dhcp4 (eth1): canceled DHCP transaction
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[854]: <info>  [1769000819.6017] dhcp4 (eth1): state changed no lease
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[854]: <info>  [1769000819.6100] exiting (success)
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: Stopped Network Manager.
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: Starting Network Manager...
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.6756] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3db60b82-452d-4090-8c5d-4863fb6f0cf4)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.6758] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.6824] manager[0x55a6518db000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: Starting Hostname Service...
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: Started Hostname Service.
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7854] hostname: hostname: using hostnamed
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7856] hostname: static hostname changed from (none) to "np0005590528.novalocal"
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7865] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7872] manager[0x55a6518db000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7872] manager[0x55a6518db000]: rfkill: WWAN hardware radio set enabled
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7918] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7918] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7919] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7920] manager: Networking is enabled by state file
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7924] settings: Loaded settings plugin: keyfile (internal)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7930] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7978] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.7995] dhcp: init: Using DHCP client 'internal'
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8001] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8011] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8021] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8035] device (lo): Activation: starting connection 'lo' (cb2caf48-e7d3-4014-a1eb-1fea24d085c3)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8050] device (eth0): carrier: link connected
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8059] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8071] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8071] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8088] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8103] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8118] device (eth1): carrier: link connected
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8125] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8136] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (5f608bee-bbd6-3307-abae-f2f56ef54334) (indicated)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8137] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8149] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8162] device (eth1): Activation: starting connection 'Wired connection 1' (5f608bee-bbd6-3307-abae-f2f56ef54334)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8171] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: Started Network Manager.
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8189] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8201] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8206] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8209] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8213] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8218] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8221] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8226] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8249] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8253] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8261] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8264] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8279] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8282] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8287] device (lo): Activation: successful, device activated.
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8293] dhcp4 (eth0): state changed new lease, address=38.102.83.175
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8298] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 13:06:59 np0005590528.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8356] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8368] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8370] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8373] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8376] device (eth0): Activation: successful, device activated.
Jan 21 13:06:59 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000819.8380] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 13:06:59 np0005590528.novalocal sudo[7177]: pam_unix(sudo:session): session closed for user root
Jan 21 13:07:00 np0005590528.novalocal python3[7263]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-a58f-98b0-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:07:09 np0005590528.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 13:07:29 np0005590528.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9000] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 13:07:44 np0005590528.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 13:07:44 np0005590528.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9372] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9376] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9393] device (eth1): Activation: successful, device activated.
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9407] manager: startup complete
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9412] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <warn>  [1769000864.9428] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9440] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 21 13:07:44 np0005590528.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9567] dhcp4 (eth1): canceled DHCP transaction
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9568] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9568] dhcp4 (eth1): state changed no lease
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9588] policy: auto-activating connection 'ci-private-network' (d7910448-f944-5d05-b69e-270d04ed29fa)
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9593] device (eth1): Activation: starting connection 'ci-private-network' (d7910448-f944-5d05-b69e-270d04ed29fa)
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9594] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9597] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9612] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9621] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9663] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9664] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:07:44 np0005590528.novalocal NetworkManager[7188]: <info>  [1769000864.9670] device (eth1): Activation: successful, device activated.
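[annotation] In the sequence above, eth1's assumed profile 'Wired connection 1' fails DHCP ('ip-config-unavailable') and NetworkManager immediately auto-activates the fallback profile 'ci-private-network'. A minimal sketch for inspecting that kind of fallback on a similar host (standard NetworkManager/journalctl commands, not taken from this log):

    # list devices and the profiles currently bound to them
    nmcli device status
    nmcli connection show --active
    # replay NetworkManager's decisions around the failure window
    journalctl -u NetworkManager --since "13:07:44" --until "13:07:46"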
Jan 21 13:07:51 np0005590528.novalocal systemd[4304]: Starting Mark boot as successful...
Jan 21 13:07:51 np0005590528.novalocal systemd[4304]: Finished Mark boot as successful.
Jan 21 13:07:55 np0005590528.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 13:08:00 np0005590528.novalocal sshd-session[4313]: Received disconnect from 38.102.83.114 port 52106:11: disconnected by user
Jan 21 13:08:00 np0005590528.novalocal sshd-session[4313]: Disconnected from user zuul 38.102.83.114 port 52106
Jan 21 13:08:00 np0005590528.novalocal sshd-session[4300]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:08:00 np0005590528.novalocal systemd-logind[780]: Session 1 logged out. Waiting for processes to exit.
Jan 21 13:08:03 np0005590528.novalocal sshd-session[7292]: Accepted publickey for zuul from 38.102.83.114 port 34414 ssh2: RSA SHA256:554VC9nlbLKS9dRb6a/TnBIuiyV41v4wVIBzdCoA//M
Jan 21 13:08:03 np0005590528.novalocal systemd-logind[780]: New session 3 of user zuul.
Jan 21 13:08:03 np0005590528.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 21 13:08:03 np0005590528.novalocal sshd-session[7292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:08:03 np0005590528.novalocal sudo[7371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksfcgcsdyfzgbrwthssnpjjwpbjtwhrt ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 21 13:08:03 np0005590528.novalocal sudo[7371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:08:03 np0005590528.novalocal python3[7373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:08:03 np0005590528.novalocal sudo[7371]: pam_unix(sudo:session): session closed for user root
Jan 21 13:08:03 np0005590528.novalocal sudo[7444]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sidacmuyerbibgllkanokupqqsxplcwi ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 21 13:08:03 np0005590528.novalocal sudo[7444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:08:04 np0005590528.novalocal python3[7446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769000883.3891537-267-275332765273714/source _original_basename=tmpttxj4f79 follow=False checksum=c968927fe5bb1666c25ab44199aea18189d3ab99 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:08:04 np0005590528.novalocal sudo[7444]: pam_unix(sudo:session): session closed for user root
Jan 21 13:08:06 np0005590528.novalocal sshd-session[7295]: Connection closed by 38.102.83.114 port 34414
Jan 21 13:08:06 np0005590528.novalocal sshd-session[7292]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:08:06 np0005590528.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 21 13:08:06 np0005590528.novalocal systemd-logind[780]: Session 3 logged out. Waiting for processes to exit.
Jan 21 13:08:06 np0005590528.novalocal systemd-logind[780]: Removed session 3.
Jan 21 13:10:51 np0005590528.novalocal systemd[4304]: Created slice User Background Tasks Slice.
Jan 21 13:10:51 np0005590528.novalocal systemd[4304]: Starting Cleanup of User's Temporary Files and Directories...
Jan 21 13:10:51 np0005590528.novalocal systemd[4304]: Finished Cleanup of User's Temporary Files and Directories.
Jan 21 13:15:28 np0005590528.novalocal sshd-session[7476]: Accepted publickey for zuul from 38.102.83.114 port 48374 ssh2: RSA SHA256:554VC9nlbLKS9dRb6a/TnBIuiyV41v4wVIBzdCoA//M
Jan 21 13:15:28 np0005590528.novalocal systemd-logind[780]: New session 4 of user zuul.
Jan 21 13:15:28 np0005590528.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 21 13:15:28 np0005590528.novalocal sshd-session[7476]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:15:28 np0005590528.novalocal sudo[7503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwjnbxdpiycavdytzsncxeoeivieutb ; /usr/bin/python3'
Jan 21 13:15:28 np0005590528.novalocal sudo[7503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:28 np0005590528.novalocal python3[7505]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-9167-ce6c-00000000216d-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:15:28 np0005590528.novalocal sudo[7503]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:28 np0005590528.novalocal sudo[7532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arqwkuqdhkmhtgnlzrzeurwjhblsjeur ; /usr/bin/python3'
Jan 21 13:15:28 np0005590528.novalocal sudo[7532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:29 np0005590528.novalocal python3[7534]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:15:29 np0005590528.novalocal sudo[7532]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:29 np0005590528.novalocal sudo[7558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywkpmkurqwabihqjfvbvitjvecngdrmq ; /usr/bin/python3'
Jan 21 13:15:29 np0005590528.novalocal sudo[7558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:29 np0005590528.novalocal python3[7560]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:15:29 np0005590528.novalocal sudo[7558]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:29 np0005590528.novalocal sudo[7584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljspgrxfcyexrfiqznwqpztcopmdhzem ; /usr/bin/python3'
Jan 21 13:15:29 np0005590528.novalocal sudo[7584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:29 np0005590528.novalocal python3[7586]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:15:29 np0005590528.novalocal sudo[7584]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:29 np0005590528.novalocal sudo[7610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lozoencdnwvnzzcdlbsdgbrlmyrkggic ; /usr/bin/python3'
Jan 21 13:15:29 np0005590528.novalocal sudo[7610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:29 np0005590528.novalocal python3[7612]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:15:29 np0005590528.novalocal sudo[7610]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:30 np0005590528.novalocal sudo[7636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwkftokgxoelymldbbctqvdazaazuelg ; /usr/bin/python3'
Jan 21 13:15:30 np0005590528.novalocal sudo[7636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:30 np0005590528.novalocal python3[7638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:15:30 np0005590528.novalocal sudo[7636]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:30 np0005590528.novalocal sudo[7714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvnovjijmovdmnejghkesakpdektesjs ; /usr/bin/python3'
Jan 21 13:15:30 np0005590528.novalocal sudo[7714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:30 np0005590528.novalocal python3[7716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:15:30 np0005590528.novalocal sudo[7714]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:30 np0005590528.novalocal sudo[7787]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebqnlshtqapnsuvousovsutcmazffasi ; /usr/bin/python3'
Jan 21 13:15:30 np0005590528.novalocal sudo[7787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:31 np0005590528.novalocal python3[7789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769001330.4980493-498-80855646714557/source _original_basename=tmp1krssaov follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:15:31 np0005590528.novalocal sudo[7787]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:31 np0005590528.novalocal sudo[7837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzhttsepvnsjzqwcwgnmdtishthbehbk ; /usr/bin/python3'
Jan 21 13:15:31 np0005590528.novalocal sudo[7837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:31 np0005590528.novalocal python3[7839]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 13:15:31 np0005590528.novalocal systemd[1]: Reloading.
Jan 21 13:15:31 np0005590528.novalocal systemd-rc-local-generator[7861]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:15:32 np0005590528.novalocal sudo[7837]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:33 np0005590528.novalocal sudo[7893]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqiccqiownnotxraawwsjjvqttzrxleb ; /usr/bin/python3'
Jan 21 13:15:33 np0005590528.novalocal sudo[7893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:33 np0005590528.novalocal python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 21 13:15:33 np0005590528.novalocal sudo[7893]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:33 np0005590528.novalocal sudo[7919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcthgubddwmubvocppmsdpxlabszvrvx ; /usr/bin/python3'
Jan 21 13:15:33 np0005590528.novalocal sudo[7919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:34 np0005590528.novalocal python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:15:34 np0005590528.novalocal sudo[7919]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:34 np0005590528.novalocal sudo[7947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nggqufhvlfbskbnzabapzcnvagbgsqmx ; /usr/bin/python3'
Jan 21 13:15:34 np0005590528.novalocal sudo[7947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:34 np0005590528.novalocal python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:15:34 np0005590528.novalocal sudo[7947]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:34 np0005590528.novalocal sudo[7975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zophlpzaifveibcdormqkroxfyhrhjme ; /usr/bin/python3'
Jan 21 13:15:34 np0005590528.novalocal sudo[7975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:34 np0005590528.novalocal python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:15:34 np0005590528.novalocal sudo[7975]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:34 np0005590528.novalocal sudo[8003]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smybzjazyhbouisgiqvzasmxndfropeh ; /usr/bin/python3'
Jan 21 13:15:34 np0005590528.novalocal sudo[8003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:34 np0005590528.novalocal python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:15:34 np0005590528.novalocal sudo[8003]: pam_unix(sudo:session): session closed for user root
Jan 21 13:15:35 np0005590528.novalocal python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-9167-ce6c-000000002174-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
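[annotation] The four looped tasks above write identical cgroup-v2 I/O throttles into each top-level slice, and the command here reads them back to verify. A minimal shell sketch of the same procedure, assuming 252:0 is the MAJ:MIN pair the earlier lsblk task reported for /dev/vda (that mapping is inferred from the log, not stated in it):

    # assumes 252:0 == /dev/vda (see the 'lsblk -nd -o MAJ:MIN /dev/vda' task above)
    for slice in init.scope machine.slice system.slice user.slice; do
        echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > "/sys/fs/cgroup/${slice}/io.max"
    done
    cat /sys/fs/cgroup/system.slice/io.max   # verify the limits were applied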
Jan 21 13:15:35 np0005590528.novalocal python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 13:15:37 np0005590528.novalocal sshd-session[7479]: Connection closed by 38.102.83.114 port 48374
Jan 21 13:15:37 np0005590528.novalocal sshd-session[7476]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:15:37 np0005590528.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 21 13:15:37 np0005590528.novalocal systemd[1]: session-4.scope: Consumed 4.105s CPU time.
Jan 21 13:15:37 np0005590528.novalocal systemd-logind[780]: Session 4 logged out. Waiting for processes to exit.
Jan 21 13:15:37 np0005590528.novalocal systemd-logind[780]: Removed session 4.
Jan 21 13:15:39 np0005590528.novalocal sshd-session[8070]: Accepted publickey for zuul from 38.102.83.114 port 53548 ssh2: RSA SHA256:554VC9nlbLKS9dRb6a/TnBIuiyV41v4wVIBzdCoA//M
Jan 21 13:15:39 np0005590528.novalocal systemd-logind[780]: New session 5 of user zuul.
Jan 21 13:15:39 np0005590528.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 21 13:15:39 np0005590528.novalocal sshd-session[8070]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:15:39 np0005590528.novalocal sudo[8097]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzucwpipfvcqqvpdzxqlbezwwmnxcvcs ; /usr/bin/python3'
Jan 21 13:15:39 np0005590528.novalocal sudo[8097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:15:39 np0005590528.novalocal python3[8099]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 13:15:44 np0005590528.novalocal irqbalance[775]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 21 13:15:44 np0005590528.novalocal irqbalance[775]: IRQ 27 affinity is now unmanaged
Jan 21 13:15:45 np0005590528.novalocal setsebool[8139]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 21 13:15:45 np0005590528.novalocal setsebool[8139]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 21 13:15:57 np0005590528.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 21 13:15:57 np0005590528.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 13:15:57 np0005590528.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 21 13:15:57 np0005590528.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 13:15:57 np0005590528.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 21 13:15:57 np0005590528.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 13:15:57 np0005590528.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 13:15:57 np0005590528.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 13:16:12 np0005590528.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 21 13:16:13 np0005590528.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 13:16:13 np0005590528.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 21 13:16:13 np0005590528.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 13:16:13 np0005590528.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 21 13:16:13 np0005590528.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 13:16:13 np0005590528.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 13:16:13 np0005590528.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 13:16:34 np0005590528.novalocal dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 21 13:16:34 np0005590528.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 13:16:34 np0005590528.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 21 13:16:34 np0005590528.novalocal systemd[1]: Reloading.
Jan 21 13:16:34 np0005590528.novalocal systemd-rc-local-generator[8912]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:16:34 np0005590528.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 13:16:36 np0005590528.novalocal sudo[8097]: pam_unix(sudo:session): session closed for user root
Jan 21 13:16:37 np0005590528.novalocal python3[10877]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-9b47-3828-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:16:38 np0005590528.novalocal kernel: evm: overlay not supported
Jan 21 13:16:38 np0005590528.novalocal systemd[4304]: Starting D-Bus User Message Bus...
Jan 21 13:16:38 np0005590528.novalocal dbus-broker-launch[11762]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 21 13:16:38 np0005590528.novalocal dbus-broker-launch[11762]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 21 13:16:38 np0005590528.novalocal systemd[4304]: Started D-Bus User Message Bus.
Jan 21 13:16:38 np0005590528.novalocal dbus-broker-lau[11762]: Ready
Jan 21 13:16:38 np0005590528.novalocal systemd[4304]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 21 13:16:38 np0005590528.novalocal systemd[4304]: Created slice Slice /user.
Jan 21 13:16:38 np0005590528.novalocal systemd[4304]: podman-11653.scope: unit configures an IP firewall, but not running as root.
Jan 21 13:16:38 np0005590528.novalocal systemd[4304]: (This warning is only shown for the first unit using IP firewalling.)
Jan 21 13:16:38 np0005590528.novalocal systemd[4304]: Started podman-11653.scope.
Jan 21 13:16:38 np0005590528.novalocal systemd[4304]: Started podman-pause-e8ee7d97.scope.
Jan 21 13:16:41 np0005590528.novalocal sudo[13920]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnyonpcpytncrmfcipwvwrxrdykiwahu ; /usr/bin/python3'
Jan 21 13:16:41 np0005590528.novalocal sudo[13920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:16:41 np0005590528.novalocal python3[13922]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.83:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.83:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:16:41 np0005590528.novalocal python3[13922]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 21 13:16:41 np0005590528.novalocal sudo[13920]: pam_unix(sudo:session): session closed for user root
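[annotation] The blockinfile task above appends a TOML stanza marking 38.102.83.83:5001 as an insecure container registry. A hand-rolled equivalent of what it writes (the path and values come from the task itself; the markers match blockinfile's defaults shown in the invocation):

    cat >> /etc/containers/registries.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.83:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
    EOF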
Jan 21 13:16:42 np0005590528.novalocal sshd-session[8073]: Connection closed by 38.102.83.114 port 53548
Jan 21 13:16:42 np0005590528.novalocal sshd-session[8070]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:16:42 np0005590528.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 21 13:16:42 np0005590528.novalocal systemd[1]: session-5.scope: Consumed 45.482s CPU time.
Jan 21 13:16:42 np0005590528.novalocal systemd-logind[780]: Session 5 logged out. Waiting for processes to exit.
Jan 21 13:16:42 np0005590528.novalocal systemd-logind[780]: Removed session 5.
Jan 21 13:17:00 np0005590528.novalocal sshd-session[20253]: Connection closed by 38.102.83.129 port 34966 [preauth]
Jan 21 13:17:00 np0005590528.novalocal sshd-session[20255]: Unable to negotiate with 38.102.83.129 port 34998: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 21 13:17:00 np0005590528.novalocal sshd-session[20259]: Connection closed by 38.102.83.129 port 34976 [preauth]
Jan 21 13:17:00 np0005590528.novalocal sshd-session[20256]: Unable to negotiate with 38.102.83.129 port 34984: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 21 13:17:00 np0005590528.novalocal sshd-session[20257]: Unable to negotiate with 38.102.83.129 port 35014: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 21 13:17:04 np0005590528.novalocal sshd-session[21664]: Accepted publickey for zuul from 38.102.83.114 port 40186 ssh2: RSA SHA256:554VC9nlbLKS9dRb6a/TnBIuiyV41v4wVIBzdCoA//M
Jan 21 13:17:04 np0005590528.novalocal systemd-logind[780]: New session 6 of user zuul.
Jan 21 13:17:04 np0005590528.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 21 13:17:04 np0005590528.novalocal sshd-session[21664]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:17:04 np0005590528.novalocal python3[21768]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNuUYMxZig0i8kJCYUhWki8ZvkWICB3zdbibeZ1b2gDaGocnHuZKkH+w5kWILwxK5bpN69Tt67l9rn12M2mvskk= zuul@np0005590527.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:17:04 np0005590528.novalocal sudo[21931]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdyhxvvnabkhqjgmjfekvygmnlqttpls ; /usr/bin/python3'
Jan 21 13:17:04 np0005590528.novalocal sudo[21931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:17:04 np0005590528.novalocal python3[21941]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNuUYMxZig0i8kJCYUhWki8ZvkWICB3zdbibeZ1b2gDaGocnHuZKkH+w5kWILwxK5bpN69Tt67l9rn12M2mvskk= zuul@np0005590527.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:17:04 np0005590528.novalocal sudo[21931]: pam_unix(sudo:session): session closed for user root
Jan 21 13:17:05 np0005590528.novalocal sudo[22254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pavvdzcokpzzrkyaemrkfsityeoetgiw ; /usr/bin/python3'
Jan 21 13:17:05 np0005590528.novalocal sudo[22254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:17:05 np0005590528.novalocal python3[22263]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005590528.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 21 13:17:05 np0005590528.novalocal useradd[22325]: new group: name=cloud-admin, GID=1002
Jan 21 13:17:05 np0005590528.novalocal useradd[22325]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 21 13:17:05 np0005590528.novalocal sudo[22254]: pam_unix(sudo:session): session closed for user root
Jan 21 13:17:06 np0005590528.novalocal sudo[22459]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myduaxvqvejkzidpybjcfitupvgypukm ; /usr/bin/python3'
Jan 21 13:17:06 np0005590528.novalocal sudo[22459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:17:06 np0005590528.novalocal python3[22469]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNuUYMxZig0i8kJCYUhWki8ZvkWICB3zdbibeZ1b2gDaGocnHuZKkH+w5kWILwxK5bpN69Tt67l9rn12M2mvskk= zuul@np0005590527.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 21 13:17:06 np0005590528.novalocal sudo[22459]: pam_unix(sudo:session): session closed for user root
Jan 21 13:17:06 np0005590528.novalocal sudo[22711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwdfhvjpadhkrrrabkcwembvucdqicsa ; /usr/bin/python3'
Jan 21 13:17:06 np0005590528.novalocal sudo[22711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:17:06 np0005590528.novalocal python3[22720]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:17:06 np0005590528.novalocal sudo[22711]: pam_unix(sudo:session): session closed for user root
Jan 21 13:17:06 np0005590528.novalocal sudo[22941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzlsnzmtjbtdrhzmuilesdvxkwviejse ; /usr/bin/python3'
Jan 21 13:17:06 np0005590528.novalocal sudo[22941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:17:07 np0005590528.novalocal python3[22949]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769001426.417607-135-153270531149106/source _original_basename=tmp5hvqkc5e follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:17:07 np0005590528.novalocal sudo[22941]: pam_unix(sudo:session): session closed for user root
Jan 21 13:17:07 np0005590528.novalocal sudo[23240]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnifhhnirhdqoenqoqotihggquhklhzk ; /usr/bin/python3'
Jan 21 13:17:07 np0005590528.novalocal sudo[23240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:17:07 np0005590528.novalocal python3[23247]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 21 13:17:07 np0005590528.novalocal systemd[1]: Starting Hostname Service...
Jan 21 13:17:07 np0005590528.novalocal systemd[1]: Started Hostname Service.
Jan 21 13:17:08 np0005590528.novalocal systemd-hostnamed[23342]: Changed pretty hostname to 'compute-0'
Jan 21 13:17:08 compute-0 systemd-hostnamed[23342]: Hostname set to <compute-0> (static)
Jan 21 13:17:08 compute-0 NetworkManager[7188]: <info>  [1769001428.0308] hostname: static hostname changed from "np0005590528.novalocal" to "compute-0"
Jan 21 13:17:08 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 13:17:08 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
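[annotation] The hostname change above (ansible.builtin.hostname with use=systemd) drives systemd-hostnamed, which is why both the pretty and static hostnames flip to 'compute-0' and NetworkManager picks up the change. A rough manual equivalent, as a sketch rather than what the module literally runs:

    hostnamectl set-hostname compute-0
    hostnamectl status   # confirm the static and pretty hostnames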
Jan 21 13:17:08 compute-0 sudo[23240]: pam_unix(sudo:session): session closed for user root
Jan 21 13:17:08 compute-0 sshd-session[21713]: Connection closed by 38.102.83.114 port 40186
Jan 21 13:17:08 compute-0 sshd-session[21664]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:17:08 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 21 13:17:08 compute-0 systemd[1]: session-6.scope: Consumed 2.289s CPU time.
Jan 21 13:17:08 compute-0 systemd-logind[780]: Session 6 logged out. Waiting for processes to exit.
Jan 21 13:17:08 compute-0 systemd-logind[780]: Removed session 6.
Jan 21 13:17:18 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 13:17:29 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 13:17:29 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 13:17:29 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 4.355s CPU time.
Jan 21 13:17:29 compute-0 systemd[1]: run-re46ce0be925a49cdab30d0fecf9b0a10.service: Deactivated successfully.
Jan 21 13:17:38 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 13:19:41 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 21 13:19:41 compute-0 systemd[1]: Starting dnf makecache...
Jan 21 13:19:41 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 21 13:19:41 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 21 13:19:41 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 21 13:19:41 compute-0 dnf[29918]: Failed determining last makecache time.
Jan 21 13:19:41 compute-0 dnf[29918]: CentOS Stream 9 - BaseOS                         57 kB/s | 6.7 kB     00:00
Jan 21 13:19:41 compute-0 dnf[29918]: CentOS Stream 9 - AppStream                      62 kB/s | 6.8 kB     00:00
Jan 21 13:19:42 compute-0 dnf[29918]: CentOS Stream 9 - CRB                            51 kB/s | 6.6 kB     00:00
Jan 21 13:19:42 compute-0 dnf[29918]: CentOS Stream 9 - Extras packages                32 kB/s | 7.3 kB     00:00
Jan 21 13:19:42 compute-0 dnf[29918]: Metadata cache created.
Jan 21 13:19:42 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 21 13:19:42 compute-0 systemd[1]: Finished dnf makecache.
Jan 21 13:21:30 compute-0 sshd-session[29925]: Accepted publickey for zuul from 38.102.83.129 port 44422 ssh2: RSA SHA256:554VC9nlbLKS9dRb6a/TnBIuiyV41v4wVIBzdCoA//M
Jan 21 13:21:30 compute-0 systemd-logind[780]: New session 7 of user zuul.
Jan 21 13:21:30 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 21 13:21:30 compute-0 sshd-session[29925]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:21:30 compute-0 python3[30001]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:21:32 compute-0 sudo[30115]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqidvzkjcpadwmbnnlxqhlmijpdjngvc ; /usr/bin/python3'
Jan 21 13:21:32 compute-0 sudo[30115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:32 compute-0 python3[30117]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:21:32 compute-0 sudo[30115]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:32 compute-0 sudo[30188]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kstruqhewyxrdofyflnmzhybjeibasms ; /usr/bin/python3'
Jan 21 13:21:32 compute-0 sudo[30188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:32 compute-0 python3[30190]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:21:32 compute-0 sudo[30188]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:33 compute-0 sudo[30214]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yufhldwtyhffbpzcuygfzckrlkihsswd ; /usr/bin/python3'
Jan 21 13:21:33 compute-0 sudo[30214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:33 compute-0 python3[30216]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:21:33 compute-0 sudo[30214]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:33 compute-0 sudo[30287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fshnkzdaactnswneaicxufkxrsvgwmuh ; /usr/bin/python3'
Jan 21 13:21:33 compute-0 sudo[30287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:33 compute-0 python3[30289]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:21:33 compute-0 sudo[30287]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:33 compute-0 sudo[30313]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oezxsfrobrzelaeawetzcxyaliuodnne ; /usr/bin/python3'
Jan 21 13:21:33 compute-0 sudo[30313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:34 compute-0 python3[30315]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:21:34 compute-0 sudo[30313]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:34 compute-0 sudo[30386]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbliehbmansiiiqlughwdfyjqzfnijyg ; /usr/bin/python3'
Jan 21 13:21:34 compute-0 sudo[30386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:34 compute-0 python3[30388]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:21:34 compute-0 sudo[30386]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:34 compute-0 sudo[30412]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzslphhocqoigaufctztqexgvfbakkbd ; /usr/bin/python3'
Jan 21 13:21:34 compute-0 sudo[30412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:34 compute-0 python3[30414]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:21:34 compute-0 sudo[30412]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:35 compute-0 sudo[30485]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slzpattlfumplsrxcbohoozdrwfohlhy ; /usr/bin/python3'
Jan 21 13:21:35 compute-0 sudo[30485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:35 compute-0 python3[30487]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:21:35 compute-0 sudo[30485]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:35 compute-0 sudo[30511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpcuiysavykdyghmgswixtgofyuynuch ; /usr/bin/python3'
Jan 21 13:21:35 compute-0 sudo[30511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:35 compute-0 python3[30513]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:21:35 compute-0 sudo[30511]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:35 compute-0 sudo[30584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usytprltmdhswlxrbjkqwgztsfxywwsg ; /usr/bin/python3'
Jan 21 13:21:35 compute-0 sudo[30584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:35 compute-0 python3[30586]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:21:35 compute-0 sudo[30584]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:36 compute-0 sudo[30610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbehzbzusjmovefwpqrxlyegsomsrjdj ; /usr/bin/python3'
Jan 21 13:21:36 compute-0 sudo[30610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:36 compute-0 python3[30612]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:21:36 compute-0 sudo[30610]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:36 compute-0 sudo[30683]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emtgyfomtujtkdlcdflhszltyhlgioyz ; /usr/bin/python3'
Jan 21 13:21:36 compute-0 sudo[30683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:36 compute-0 python3[30685]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:21:36 compute-0 sudo[30683]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:36 compute-0 sudo[30709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofejcbtrzovudozdsikeezdteukayshm ; /usr/bin/python3'
Jan 21 13:21:36 compute-0 sudo[30709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:36 compute-0 python3[30711]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:21:36 compute-0 sudo[30709]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:37 compute-0 sudo[30782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oafmahdkhyzuokjgbropiivityqztozz ; /usr/bin/python3'
Jan 21 13:21:37 compute-0 sudo[30782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:21:37 compute-0 python3[30784]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769001692.102231-33630-79912547566970/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:21:37 compute-0 sudo[30782]: pam_unix(sudo:session): session closed for user root
Jan 21 13:21:39 compute-0 sshd-session[30809]: Connection closed by 192.168.122.11 port 59874 [preauth]
Jan 21 13:21:39 compute-0 sshd-session[30810]: Connection closed by 192.168.122.11 port 59890 [preauth]
Jan 21 13:21:39 compute-0 sshd-session[30811]: Unable to negotiate with 192.168.122.11 port 59904: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 21 13:21:39 compute-0 sshd-session[30812]: Unable to negotiate with 192.168.122.11 port 59906: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 21 13:21:39 compute-0 sshd-session[30813]: Unable to negotiate with 192.168.122.11 port 59920: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 21 13:21:49 compute-0 python3[30842]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:26:49 compute-0 sshd-session[29928]: Received disconnect from 38.102.83.129 port 44422:11: disconnected by user
Jan 21 13:26:49 compute-0 sshd-session[29928]: Disconnected from user zuul 38.102.83.129 port 44422
Jan 21 13:26:49 compute-0 sshd-session[29925]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:26:49 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 21 13:26:49 compute-0 systemd[1]: session-7.scope: Consumed 5.524s CPU time.
Jan 21 13:26:49 compute-0 systemd-logind[780]: Session 7 logged out. Waiting for processes to exit.
Jan 21 13:26:49 compute-0 systemd-logind[780]: Removed session 7.
Jan 21 13:35:01 compute-0 sshd-session[30850]: Accepted publickey for zuul from 192.168.122.30 port 46326 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:35:01 compute-0 systemd-logind[780]: New session 8 of user zuul.
Jan 21 13:35:01 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 21 13:35:01 compute-0 sshd-session[30850]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:35:02 compute-0 python3.9[31003]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:35:03 compute-0 sudo[31182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xktuzzjxcbnhcoxddmsoqcbbyvwrczdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002502.761944-27-66965612344597/AnsiballZ_command.py'
Jan 21 13:35:03 compute-0 sudo[31182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:35:03 compute-0 python3.9[31184]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:35:10 compute-0 sudo[31182]: pam_unix(sudo:session): session closed for user root
Jan 21 13:35:10 compute-0 sshd-session[30853]: Connection closed by 192.168.122.30 port 46326
Jan 21 13:35:10 compute-0 sshd-session[30850]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:35:10 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 21 13:35:10 compute-0 systemd[1]: session-8.scope: Consumed 7.804s CPU time.
Jan 21 13:35:10 compute-0 systemd-logind[780]: Session 8 logged out. Waiting for processes to exit.
Jan 21 13:35:10 compute-0 systemd-logind[780]: Removed session 8.
Jan 21 13:35:26 compute-0 sshd-session[31241]: Accepted publickey for zuul from 192.168.122.30 port 52070 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:35:26 compute-0 systemd-logind[780]: New session 9 of user zuul.
Jan 21 13:35:26 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 21 13:35:26 compute-0 sshd-session[31241]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:35:26 compute-0 python3.9[31394]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 21 13:35:28 compute-0 python3.9[31568]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:35:28 compute-0 sudo[31718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbyujfcdcbhlduthbffldrhdcqzrasvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002528.2690294-40-76759180435136/AnsiballZ_command.py'
Jan 21 13:35:28 compute-0 sudo[31718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:35:28 compute-0 python3.9[31720]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:35:28 compute-0 sudo[31718]: pam_unix(sudo:session): session closed for user root
Jan 21 13:35:29 compute-0 sudo[31871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpwaecllplptcnmalhungttanpcnxlrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002529.2873704-52-74667091171272/AnsiballZ_stat.py'
Jan 21 13:35:29 compute-0 sudo[31871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:35:29 compute-0 python3.9[31873]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:35:29 compute-0 sudo[31871]: pam_unix(sudo:session): session closed for user root
Jan 21 13:35:30 compute-0 sudo[32023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbrprnrwmqmskmbyynpqgfgvrirztzhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002530.159766-60-124412645953997/AnsiballZ_file.py'
Jan 21 13:35:30 compute-0 sudo[32023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:35:30 compute-0 python3.9[32025]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:35:30 compute-0 sudo[32023]: pam_unix(sudo:session): session closed for user root
Jan 21 13:35:31 compute-0 sudo[32175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enjqrzievihfrrvmhzsknpmycyrkmjrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002531.079989-68-191045931377035/AnsiballZ_stat.py'
Jan 21 13:35:31 compute-0 sudo[32175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:35:31 compute-0 python3.9[32177]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:35:31 compute-0 sudo[32175]: pam_unix(sudo:session): session closed for user root
Jan 21 13:35:32 compute-0 sudo[32298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnqeuywwbnzyodepsgksusalswpsgtti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002531.079989-68-191045931377035/AnsiballZ_copy.py'
Jan 21 13:35:32 compute-0 sudo[32298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:35:32 compute-0 python3.9[32300]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002531.079989-68-191045931377035/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:35:32 compute-0 sudo[32298]: pam_unix(sudo:session): session closed for user root
Jan 21 13:35:32 compute-0 sudo[32450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvyuelmrhquuxkmmynmnfkntngzqotkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002532.518687-83-21784739885410/AnsiballZ_setup.py'
Jan 21 13:35:32 compute-0 sudo[32450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:35:33 compute-0 python3.9[32452]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:35:33 compute-0 sudo[32450]: pam_unix(sudo:session): session closed for user root
Jan 21 13:35:33 compute-0 sudo[32606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diuwimpwrdfconiarlfmesfeavpsreez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002533.5123062-91-233921499805192/AnsiballZ_file.py'
Jan 21 13:35:33 compute-0 sudo[32606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:35:34 compute-0 python3.9[32608]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:35:34 compute-0 sudo[32606]: pam_unix(sudo:session): session closed for user root
Jan 21 13:35:34 compute-0 sudo[32758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsjmwaryutlgqopijzpctmqwlcwuusbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002534.2649443-100-69092107626379/AnsiballZ_file.py'
Jan 21 13:35:34 compute-0 sudo[32758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:35:34 compute-0 python3.9[32760]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:35:34 compute-0 sudo[32758]: pam_unix(sudo:session): session closed for user root
Jan 21 13:35:35 compute-0 python3.9[32910]: ansible-ansible.builtin.service_facts Invoked
Jan 21 13:35:40 compute-0 python3.9[33163]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:35:41 compute-0 python3.9[33313]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:35:42 compute-0 python3.9[33467]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:35:43 compute-0 sudo[33623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvuuaxrzkzbrqnbdqdsxakrgguudcaex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002543.007359-148-68172638050520/AnsiballZ_setup.py'
Jan 21 13:35:43 compute-0 sudo[33623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:35:43 compute-0 python3.9[33625]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:35:43 compute-0 sudo[33623]: pam_unix(sudo:session): session closed for user root
Jan 21 13:35:44 compute-0 sudo[33707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrzcnvikswqnwczwyfrftofntaqqpmso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002543.007359-148-68172638050520/AnsiballZ_dnf.py'
Jan 21 13:35:44 compute-0 sudo[33707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:35:44 compute-0 python3.9[33709]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:36:48 compute-0 systemd[1]: Reloading.
Jan 21 13:36:48 compute-0 systemd-rc-local-generator[33907]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:36:49 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 21 13:36:49 compute-0 systemd[1]: Reloading.
Jan 21 13:36:50 compute-0 systemd-rc-local-generator[33944]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:36:50 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 21 13:36:50 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 21 13:36:50 compute-0 systemd[1]: Reloading.
Jan 21 13:36:50 compute-0 systemd-rc-local-generator[33986]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:36:50 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 21 13:36:50 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 13:36:50 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 13:36:50 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 13:38:07 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Jan 21 13:38:07 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 13:38:07 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 13:38:07 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 13:38:07 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 13:38:07 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 13:38:07 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 13:38:07 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 13:38:07 compute-0 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 21 13:38:07 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 13:38:07 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 13:38:07 compute-0 systemd[1]: Reloading.
Jan 21 13:38:07 compute-0 systemd-rc-local-generator[34321]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:38:07 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 13:38:08 compute-0 sudo[33707]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:09 compute-0 sudo[35187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppvaooyjqfnifmshhhdjwwpzittbvleo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002688.8154616-160-251295907744976/AnsiballZ_command.py'
Jan 21 13:38:09 compute-0 sudo[35187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:09 compute-0 python3.9[35212]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:38:09 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 13:38:09 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 13:38:09 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.312s CPU time.
Jan 21 13:38:09 compute-0 systemd[1]: run-r3c9de995db8a4c159455629d668cc5c6.service: Deactivated successfully.
Jan 21 13:38:10 compute-0 sudo[35187]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:11 compute-0 sudo[35511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkssvfpgpknvqqcxyxfhwytllicoltri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002690.3364606-168-161056963937198/AnsiballZ_selinux.py'
Jan 21 13:38:11 compute-0 sudo[35511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:11 compute-0 python3.9[35513]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 21 13:38:11 compute-0 sudo[35511]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:12 compute-0 sudo[35663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huawkfvnlupxckngulsqqtvnblzegdgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002691.654198-179-44166588522939/AnsiballZ_command.py'
Jan 21 13:38:12 compute-0 sudo[35663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:12 compute-0 python3.9[35665]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 21 13:38:14 compute-0 sudo[35663]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:14 compute-0 sudo[35817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jelykaiydbvzhqbcoixzwkozwgpzdswm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002694.509282-187-146768013747562/AnsiballZ_file.py'
Jan 21 13:38:14 compute-0 sudo[35817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:18 compute-0 python3.9[35819]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:38:18 compute-0 sudo[35817]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:18 compute-0 sudo[35969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiwzevzxlptlzjrykwydsqjufapikyqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002698.3415082-195-81065818485382/AnsiballZ_mount.py'
Jan 21 13:38:18 compute-0 sudo[35969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:19 compute-0 python3.9[35971]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 21 13:38:19 compute-0 sudo[35969]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:20 compute-0 sudo[36121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgkzzdlabkfuhrkhbhbfbxzjyqacsdmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002699.7525563-223-24746687024219/AnsiballZ_file.py'
Jan 21 13:38:20 compute-0 sudo[36121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:22 compute-0 python3.9[36123]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:38:22 compute-0 sudo[36121]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:22 compute-0 sudo[36273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mobmwazwgjlewvjvhcezxxxdgcewnbrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002702.4851713-231-4128037627440/AnsiballZ_stat.py'
Jan 21 13:38:22 compute-0 sudo[36273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:23 compute-0 python3.9[36275]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:38:23 compute-0 sudo[36273]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:23 compute-0 sudo[36396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huqjuewggrmvpkjrpsnpffijovedqjnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002702.4851713-231-4128037627440/AnsiballZ_copy.py'
Jan 21 13:38:23 compute-0 sudo[36396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:23 compute-0 python3.9[36398]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002702.4851713-231-4128037627440/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:38:23 compute-0 sudo[36396]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:24 compute-0 sudo[36548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujtahsqcummxneqtfyyzlwuyzfjvhvqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002704.080355-255-126075670162219/AnsiballZ_stat.py'
Jan 21 13:38:24 compute-0 sudo[36548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:24 compute-0 python3.9[36550]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:38:24 compute-0 sudo[36548]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:24 compute-0 irqbalance[775]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 21 13:38:24 compute-0 irqbalance[775]: IRQ 26 affinity is now unmanaged
Jan 21 13:38:25 compute-0 sudo[36700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxepimlsxctoqwsayrqtdvevollxsqaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002704.809059-263-110967294972410/AnsiballZ_command.py'
Jan 21 13:38:25 compute-0 sudo[36700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:25 compute-0 python3.9[36702]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:38:25 compute-0 sudo[36700]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:26 compute-0 sudo[36853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnstjpfijygfhynmgeqgdcvpuqwbupwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002705.5659242-271-170683077948623/AnsiballZ_file.py'
Jan 21 13:38:26 compute-0 sudo[36853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:26 compute-0 python3.9[36855]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:38:26 compute-0 sudo[36853]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:27 compute-0 sudo[37005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmnwcervaldpmzicdymkgohtkvrxnuwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002706.5796974-282-73281889554340/AnsiballZ_getent.py'
Jan 21 13:38:27 compute-0 sudo[37005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:27 compute-0 python3.9[37007]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 21 13:38:27 compute-0 sudo[37005]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:27 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 13:38:27 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 13:38:27 compute-0 sudo[37159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yirlmvoxrnvhkbhuhpwuenoiwfxflrfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002707.4238474-290-156588364151532/AnsiballZ_group.py'
Jan 21 13:38:27 compute-0 sudo[37159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:28 compute-0 python3.9[37161]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 13:38:28 compute-0 groupadd[37162]: group added to /etc/group: name=qemu, GID=107
Jan 21 13:38:28 compute-0 groupadd[37162]: group added to /etc/gshadow: name=qemu
Jan 21 13:38:28 compute-0 groupadd[37162]: new group: name=qemu, GID=107
Jan 21 13:38:28 compute-0 sudo[37159]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:28 compute-0 sudo[37317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzhhpycydtrpmgajwjshxclebssgocqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002708.3572094-298-52317368612252/AnsiballZ_user.py'
Jan 21 13:38:28 compute-0 sudo[37317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:29 compute-0 python3.9[37319]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 13:38:29 compute-0 useradd[37321]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 21 13:38:29 compute-0 sudo[37317]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:29 compute-0 sudo[37477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxqiyokljcetouiqzycgnlbzdjtoftff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002709.3505282-306-273094032256944/AnsiballZ_getent.py'
Jan 21 13:38:29 compute-0 sudo[37477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:29 compute-0 python3.9[37479]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 21 13:38:29 compute-0 sudo[37477]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:30 compute-0 sudo[37630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pydwymezplxecilinztizwjhdqxtfzho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002710.0289974-314-144767614892527/AnsiballZ_group.py'
Jan 21 13:38:30 compute-0 sudo[37630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:30 compute-0 python3.9[37632]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 13:38:30 compute-0 groupadd[37633]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 21 13:38:30 compute-0 groupadd[37633]: group added to /etc/gshadow: name=hugetlbfs
Jan 21 13:38:30 compute-0 groupadd[37633]: new group: name=hugetlbfs, GID=42477
Jan 21 13:38:30 compute-0 sudo[37630]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:31 compute-0 sudo[37788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrhhnlcckmrstsnrqyfhaumcosnwzrmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002710.7893116-323-233589214477147/AnsiballZ_file.py'
Jan 21 13:38:31 compute-0 sudo[37788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:31 compute-0 python3.9[37790]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 21 13:38:31 compute-0 sudo[37788]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:31 compute-0 sudo[37940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylyreonjxyjxxzbeannghibrirwwwgux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002711.6295578-334-101294444905977/AnsiballZ_dnf.py'
Jan 21 13:38:31 compute-0 sudo[37940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:32 compute-0 python3.9[37942]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:38:35 compute-0 sudo[37940]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:36 compute-0 sudo[38093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iukrardbfswzkjfulairnknmdcwuabvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002716.073934-342-5781679184583/AnsiballZ_file.py'
Jan 21 13:38:36 compute-0 sudo[38093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:36 compute-0 python3.9[38095]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:38:36 compute-0 sudo[38093]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:37 compute-0 sudo[38245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnxogrpgkspbeieqlkqfxgzbxjjepjak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002716.7933543-350-129654173522440/AnsiballZ_stat.py'
Jan 21 13:38:37 compute-0 sudo[38245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:37 compute-0 python3.9[38247]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:38:37 compute-0 sudo[38245]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:37 compute-0 sudo[38368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsviinhzedsjgcvnimvstnrwlivrjwzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002716.7933543-350-129654173522440/AnsiballZ_copy.py'
Jan 21 13:38:37 compute-0 sudo[38368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:37 compute-0 python3.9[38370]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769002716.7933543-350-129654173522440/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:38:37 compute-0 sudo[38368]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:38 compute-0 sudo[38520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frpofgtszcbzenixdgsccdyyicixzhis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002718.082831-365-240887037749295/AnsiballZ_systemd.py'
Jan 21 13:38:38 compute-0 sudo[38520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:38 compute-0 python3.9[38522]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:38:39 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 21 13:38:39 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 21 13:38:39 compute-0 kernel: Bridge firewalling registered
Jan 21 13:38:39 compute-0 systemd-modules-load[38526]: Inserted module 'br_netfilter'
Jan 21 13:38:39 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 21 13:38:39 compute-0 sudo[38520]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:39 compute-0 sudo[38680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvnsxhgmkaqyvbutakzlcfzeaelwykar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002719.3101804-373-140991060332786/AnsiballZ_stat.py'
Jan 21 13:38:39 compute-0 sudo[38680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:39 compute-0 python3.9[38682]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:38:39 compute-0 sudo[38680]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:40 compute-0 sudo[38803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhxxrzgewkvgmcizntupvuqxxdwyneth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002719.3101804-373-140991060332786/AnsiballZ_copy.py'
Jan 21 13:38:40 compute-0 sudo[38803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:40 compute-0 python3.9[38805]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769002719.3101804-373-140991060332786/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:38:40 compute-0 sudo[38803]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:41 compute-0 sudo[38955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxldbxgpxtijbasgtngscpivvvplepiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002720.79158-391-127187662354266/AnsiballZ_dnf.py'
Jan 21 13:38:41 compute-0 sudo[38955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:41 compute-0 python3.9[38957]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:38:45 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 13:38:45 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 13:38:45 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 13:38:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 13:38:45 compute-0 systemd[1]: Reloading.
Jan 21 13:38:46 compute-0 systemd-rc-local-generator[39020]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:38:46 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 13:38:46 compute-0 sudo[38955]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:47 compute-0 python3.9[40204]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:38:48 compute-0 python3.9[41141]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 21 13:38:48 compute-0 python3.9[41814]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:38:49 compute-0 sudo[42701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdjfabhhtpcdggydzvgctjwlllyuaaze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002729.2829545-430-230196823999145/AnsiballZ_command.py'
Jan 21 13:38:49 compute-0 sudo[42701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:49 compute-0 python3.9[42723]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:38:49 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 13:38:50 compute-0 systemd[1]: Starting Authorization Manager...
Jan 21 13:38:50 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 13:38:50 compute-0 polkitd[43343]: Started polkitd version 0.117
Jan 21 13:38:50 compute-0 polkitd[43343]: Loading rules from directory /etc/polkit-1/rules.d
Jan 21 13:38:50 compute-0 polkitd[43343]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 21 13:38:50 compute-0 polkitd[43343]: Finished loading, compiling and executing 2 rules
Jan 21 13:38:50 compute-0 polkitd[43343]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 21 13:38:50 compute-0 systemd[1]: Started Authorization Manager.
Jan 21 13:38:50 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 13:38:50 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 13:38:50 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.542s CPU time.
Jan 21 13:38:50 compute-0 systemd[1]: run-rb06f943ba12041849ebaf1ffd88daa0d.service: Deactivated successfully.
Jan 21 13:38:50 compute-0 sudo[42701]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:51 compute-0 sudo[43512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpplwbeoqtmcaqzrqunincmtescygodo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002730.753754-439-3241166818060/AnsiballZ_systemd.py'
Jan 21 13:38:51 compute-0 sudo[43512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:51 compute-0 python3.9[43514]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:38:51 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 21 13:38:51 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 21 13:38:51 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 21 13:38:51 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 13:38:51 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 13:38:51 compute-0 sudo[43512]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:52 compute-0 python3.9[43675]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 21 13:38:54 compute-0 sudo[43825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szpyujmujclnhssvpplomfwyxdakprsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002734.0082984-496-74747499170654/AnsiballZ_systemd.py'
Jan 21 13:38:54 compute-0 sudo[43825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:54 compute-0 python3.9[43827]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:38:54 compute-0 systemd[1]: Reloading.
Jan 21 13:38:54 compute-0 systemd-rc-local-generator[43855]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:38:54 compute-0 sudo[43825]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:55 compute-0 sudo[44014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhargewmdbjeuuusxsosqwgtkcldzqre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002735.129599-496-163129333334810/AnsiballZ_systemd.py'
Jan 21 13:38:55 compute-0 sudo[44014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:55 compute-0 python3.9[44016]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:38:55 compute-0 systemd[1]: Reloading.
Jan 21 13:38:55 compute-0 systemd-rc-local-generator[44044]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:38:56 compute-0 sudo[44014]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:56 compute-0 sudo[44203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxwjrmbjstotdtfyfhdecpcccxdncsxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002736.3530393-512-57890158519986/AnsiballZ_command.py'
Jan 21 13:38:56 compute-0 sudo[44203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:56 compute-0 python3.9[44205]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:38:57 compute-0 sudo[44203]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:57 compute-0 sudo[44356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiahzxsbfqbemkvookkjrlrmqikgdmww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002737.6460052-520-144484542291448/AnsiballZ_command.py'
Jan 21 13:38:57 compute-0 sudo[44356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:58 compute-0 python3.9[44358]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:38:58 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 21 13:38:58 compute-0 sudo[44356]: pam_unix(sudo:session): session closed for user root
Jan 21 13:38:58 compute-0 sudo[44509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbolacneqhitbktvwndqahyvyemncvdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002738.5454779-528-38510740465486/AnsiballZ_command.py'
Jan 21 13:38:58 compute-0 sudo[44509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:38:59 compute-0 python3.9[44511]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:39:00 compute-0 sudo[44509]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:01 compute-0 sudo[44672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvufypvtzjstyesuopczyqouhpxpvuqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002740.7328181-536-263409466862901/AnsiballZ_command.py'
Jan 21 13:39:01 compute-0 sudo[44672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:01 compute-0 python3.9[44674]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:39:01 compute-0 sudo[44672]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:01 compute-0 sudo[44825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfnancxmzstjeydqqlcgvppebbpgrrla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002741.4557025-544-98997367396776/AnsiballZ_systemd.py'
Jan 21 13:39:01 compute-0 sudo[44825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:02 compute-0 python3.9[44827]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:39:02 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 21 13:39:02 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 21 13:39:02 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 21 13:39:02 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 21 13:39:02 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 21 13:39:02 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 21 13:39:02 compute-0 sudo[44825]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:02 compute-0 sshd-session[31244]: Connection closed by 192.168.122.30 port 52070
Jan 21 13:39:02 compute-0 sshd-session[31241]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:39:02 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 21 13:39:02 compute-0 systemd[1]: session-9.scope: Consumed 2min 26.203s CPU time.
Jan 21 13:39:02 compute-0 systemd-logind[780]: Session 9 logged out. Waiting for processes to exit.
Jan 21 13:39:02 compute-0 systemd-logind[780]: Removed session 9.
Jan 21 13:39:08 compute-0 sshd-session[44857]: Accepted publickey for zuul from 192.168.122.30 port 44826 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:39:08 compute-0 systemd-logind[780]: New session 10 of user zuul.
Jan 21 13:39:08 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 21 13:39:08 compute-0 sshd-session[44857]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:39:09 compute-0 python3.9[45010]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:39:10 compute-0 sudo[45164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dacfuijtwqmcfbgrutsemhuqwtrcukgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002749.7220347-31-29187744317428/AnsiballZ_getent.py'
Jan 21 13:39:10 compute-0 sudo[45164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:10 compute-0 python3.9[45166]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 21 13:39:10 compute-0 sudo[45164]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:10 compute-0 sudo[45317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgbpsnmyzaxxbnqozndobcqaxezbofai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002750.479648-39-194543873486986/AnsiballZ_group.py'
Jan 21 13:39:10 compute-0 sudo[45317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:11 compute-0 python3.9[45319]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 13:39:11 compute-0 groupadd[45320]: group added to /etc/group: name=openvswitch, GID=42476
Jan 21 13:39:11 compute-0 groupadd[45320]: group added to /etc/gshadow: name=openvswitch
Jan 21 13:39:11 compute-0 groupadd[45320]: new group: name=openvswitch, GID=42476
Jan 21 13:39:11 compute-0 sudo[45317]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:11 compute-0 sudo[45475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seewfizcmthmrhcqowcjucamrrfaqahn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002751.3780856-47-96819785839700/AnsiballZ_user.py'
Jan 21 13:39:11 compute-0 sudo[45475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:12 compute-0 python3.9[45477]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 13:39:12 compute-0 useradd[45479]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 21 13:39:12 compute-0 useradd[45479]: add 'openvswitch' to group 'hugetlbfs'
Jan 21 13:39:12 compute-0 useradd[45479]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 21 13:39:12 compute-0 sudo[45475]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:13 compute-0 sudo[45635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvoygzzmrtlncjtqpymnlktbpmlrnnyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002752.6677217-57-276429617043158/AnsiballZ_setup.py'
Jan 21 13:39:13 compute-0 sudo[45635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:13 compute-0 python3.9[45637]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:39:13 compute-0 sudo[45635]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:13 compute-0 sudo[45719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dieuyvyanfqznicugfcqdvrgujnczmij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002752.6677217-57-276429617043158/AnsiballZ_dnf.py'
Jan 21 13:39:13 compute-0 sudo[45719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:14 compute-0 python3.9[45721]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 13:39:17 compute-0 sudo[45719]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:18 compute-0 sudo[45882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsrsgqigywznioavocujkmoryupcajkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002757.826348-71-23550145014101/AnsiballZ_dnf.py'
Jan 21 13:39:18 compute-0 sudo[45882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:18 compute-0 python3.9[45884]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:39:30 compute-0 kernel: SELinux:  Converting 2736 SID table entries...
Jan 21 13:39:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 13:39:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 13:39:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 13:39:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 13:39:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 13:39:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 13:39:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 13:39:30 compute-0 groupadd[45907]: group added to /etc/group: name=unbound, GID=994
Jan 21 13:39:30 compute-0 groupadd[45907]: group added to /etc/gshadow: name=unbound
Jan 21 13:39:30 compute-0 groupadd[45907]: new group: name=unbound, GID=994
Jan 21 13:39:30 compute-0 useradd[45914]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 21 13:39:30 compute-0 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 21 13:39:30 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 21 13:39:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 13:39:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 13:39:32 compute-0 systemd[1]: Reloading.
Jan 21 13:39:32 compute-0 systemd-rc-local-generator[46413]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:39:32 compute-0 systemd-sysv-generator[46417]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:39:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 13:39:33 compute-0 sudo[45882]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 13:39:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 13:39:33 compute-0 systemd[1]: run-r076e6202699647f4950455b338534f78.service: Deactivated successfully.
Jan 21 13:39:34 compute-0 sudo[46981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wivdalmchklvrqhvyuoqurpayhxxychm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002773.7689369-79-89297896823339/AnsiballZ_systemd.py'
Jan 21 13:39:34 compute-0 sudo[46981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:34 compute-0 python3.9[46983]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 13:39:34 compute-0 systemd[1]: Reloading.
Jan 21 13:39:34 compute-0 systemd-rc-local-generator[47014]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:39:34 compute-0 systemd-sysv-generator[47018]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:39:35 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 21 13:39:35 compute-0 chown[47025]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 21 13:39:35 compute-0 ovs-ctl[47030]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 21 13:39:35 compute-0 ovs-ctl[47030]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 21 13:39:35 compute-0 ovs-ctl[47030]: Starting ovsdb-server [  OK  ]
Jan 21 13:39:35 compute-0 ovs-vsctl[47079]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 21 13:39:35 compute-0 ovs-vsctl[47099]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"3ade990a-d6f9-4724-a58c-009e4fc34364\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 21 13:39:35 compute-0 ovs-ctl[47030]: Configuring Open vSwitch system IDs [  OK  ]
Jan 21 13:39:35 compute-0 ovs-ctl[47030]: Enabling remote OVSDB managers [  OK  ]
Jan 21 13:39:35 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 21 13:39:35 compute-0 ovs-vsctl[47105]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 21 13:39:35 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 21 13:39:35 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 21 13:39:35 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 21 13:39:35 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 21 13:39:35 compute-0 ovs-ctl[47149]: Inserting openvswitch module [  OK  ]
Jan 21 13:39:35 compute-0 ovs-ctl[47118]: Starting ovs-vswitchd [  OK  ]
Jan 21 13:39:35 compute-0 ovs-vsctl[47166]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 21 13:39:35 compute-0 ovs-ctl[47118]: Enabling remote OVSDB managers [  OK  ]
Jan 21 13:39:35 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 21 13:39:35 compute-0 systemd[1]: Starting Open vSwitch...
Jan 21 13:39:35 compute-0 systemd[1]: Finished Open vSwitch.
Jan 21 13:39:35 compute-0 sudo[46981]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:36 compute-0 python3.9[47318]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:39:37 compute-0 sudo[47468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwbwhgooofggwiioidijdulgaoewgodq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002777.065947-97-241320556409061/AnsiballZ_sefcontext.py'
Jan 21 13:39:37 compute-0 sudo[47468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:37 compute-0 python3.9[47470]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 21 13:39:39 compute-0 kernel: SELinux:  Converting 2750 SID table entries...
Jan 21 13:39:39 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 13:39:39 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 13:39:39 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 13:39:39 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 13:39:39 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 13:39:39 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 13:39:39 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 13:39:39 compute-0 sudo[47468]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:40 compute-0 python3.9[47625]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:39:41 compute-0 sudo[47781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmyqyobyqbzzczvbedpxbiqzbnbqszyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002780.8942633-115-136397094951087/AnsiballZ_dnf.py'
Jan 21 13:39:41 compute-0 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 21 13:39:41 compute-0 sudo[47781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:41 compute-0 python3.9[47783]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:39:42 compute-0 sudo[47781]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:43 compute-0 sudo[47934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rroflpwkqfthpnvqqnbycdygggwzximx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002782.9069295-123-192828706072918/AnsiballZ_command.py'
Jan 21 13:39:43 compute-0 sudo[47934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:43 compute-0 python3.9[47936]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:39:44 compute-0 sudo[47934]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:45 compute-0 sudo[48221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bckpoyhtwlnnrswvvkavabuepkgccpxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002784.5825326-131-54700265219496/AnsiballZ_file.py'
Jan 21 13:39:45 compute-0 sudo[48221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:45 compute-0 python3.9[48223]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 21 13:39:45 compute-0 sudo[48221]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:46 compute-0 python3.9[48373]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:39:46 compute-0 sudo[48525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwyfacqhfqlwjbqmmovjjjjevurtlfjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002786.2221732-147-255282385424538/AnsiballZ_dnf.py'
Jan 21 13:39:46 compute-0 sudo[48525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:46 compute-0 python3.9[48527]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:39:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 13:39:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 13:39:48 compute-0 systemd[1]: Reloading.
Jan 21 13:39:48 compute-0 systemd-rc-local-generator[48566]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:39:48 compute-0 systemd-sysv-generator[48569]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:39:48 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 13:39:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 13:39:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 13:39:49 compute-0 systemd[1]: run-r2c5a4c05df974a29a01dc217cd218032.service: Deactivated successfully.
Jan 21 13:39:49 compute-0 sudo[48525]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:49 compute-0 sudo[48841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wumvurczgmdvsyfljvsxlacvaotkglbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002789.5051224-155-159751644561256/AnsiballZ_systemd.py'
Jan 21 13:39:49 compute-0 sudo[48841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:50 compute-0 python3.9[48843]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:39:50 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 21 13:39:50 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 21 13:39:50 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 21 13:39:50 compute-0 NetworkManager[7188]: <info>  [1769002790.1420] caught SIGTERM, shutting down normally.
Jan 21 13:39:50 compute-0 systemd[1]: Stopping Network Manager...
Jan 21 13:39:50 compute-0 NetworkManager[7188]: <info>  [1769002790.1439] dhcp4 (eth0): canceled DHCP transaction
Jan 21 13:39:50 compute-0 NetworkManager[7188]: <info>  [1769002790.1439] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 13:39:50 compute-0 NetworkManager[7188]: <info>  [1769002790.1439] dhcp4 (eth0): state changed no lease
Jan 21 13:39:50 compute-0 NetworkManager[7188]: <info>  [1769002790.1443] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 13:39:50 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 13:39:50 compute-0 NetworkManager[7188]: <info>  [1769002790.5891] exiting (success)
Jan 21 13:39:50 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 13:39:50 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 21 13:39:50 compute-0 systemd[1]: Stopped Network Manager.
Jan 21 13:39:50 compute-0 systemd[1]: NetworkManager.service: Consumed 12.661s CPU time, 4.1M memory peak, read 0B from disk, written 26.5K to disk.
Jan 21 13:39:50 compute-0 systemd[1]: Starting Network Manager...
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.6936] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3db60b82-452d-4090-8c5d-4863fb6f0cf4)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.6937] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.7010] manager[0x561373e6a000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 21 13:39:50 compute-0 systemd[1]: Starting Hostname Service...
Jan 21 13:39:50 compute-0 systemd[1]: Started Hostname Service.
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.7979] hostname: hostname: using hostnamed
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.7979] hostname: static hostname changed from (none) to "compute-0"
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.7989] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.7996] manager[0x561373e6a000]: rfkill: Wi-Fi hardware radio set enabled
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.7997] manager[0x561373e6a000]: rfkill: WWAN hardware radio set enabled
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8033] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8050] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8051] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8052] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8053] manager: Networking is enabled by state file
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8057] settings: Loaded settings plugin: keyfile (internal)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8063] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8107] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8124] dhcp: init: Using DHCP client 'internal'
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8129] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8137] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8144] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8156] device (lo): Activation: starting connection 'lo' (cb2caf48-e7d3-4014-a1eb-1fea24d085c3)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8166] device (eth0): carrier: link connected
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8173] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8183] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8183] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8193] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8204] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8216] device (eth1): carrier: link connected
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8223] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8231] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (d7910448-f944-5d05-b69e-270d04ed29fa) (indicated)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8232] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8240] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8250] device (eth1): Activation: starting connection 'ci-private-network' (d7910448-f944-5d05-b69e-270d04ed29fa)
Jan 21 13:39:50 compute-0 systemd[1]: Started Network Manager.
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8260] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8273] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8276] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8278] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8280] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8283] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8284] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8287] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8290] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8296] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8299] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8309] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8322] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8336] dhcp4 (eth0): state changed new lease, address=38.102.83.175
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8343] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8432] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8439] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8440] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8441] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 21 13:39:50 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8445] device (lo): Activation: successful, device activated.
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8452] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8454] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8457] device (eth1): Activation: successful, device activated.
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8465] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8466] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8469] manager: NetworkManager state is now CONNECTED_SITE
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8471] device (eth0): Activation: successful, device activated.
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8476] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 21 13:39:50 compute-0 NetworkManager[48860]: <info>  [1769002790.8478] manager: startup complete
Jan 21 13:39:50 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 21 13:39:50 compute-0 sudo[48841]: pam_unix(sudo:session): session closed for user root
Jan 21 13:39:51 compute-0 sudo[49069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfnrhbbvimgkuucvpwwrhltrzjmxfied ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002791.0642893-163-279599572450623/AnsiballZ_dnf.py'
Jan 21 13:39:51 compute-0 sudo[49069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:39:51 compute-0 python3.9[49071]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:40:00 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 13:40:02 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 13:40:02 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 13:40:02 compute-0 systemd[1]: Reloading.
Jan 21 13:40:02 compute-0 systemd-rc-local-generator[49126]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:40:02 compute-0 systemd-sysv-generator[49130]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:40:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 13:40:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 13:40:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 13:40:03 compute-0 systemd[1]: run-rcb4219d668064803a9356f4793aff78d.service: Deactivated successfully.
Jan 21 13:40:03 compute-0 sudo[49069]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:04 compute-0 sudo[49528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsjmltbjzlnjgsqsfbsbzointpdtyafw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002804.094712-175-51147401130094/AnsiballZ_stat.py'
Jan 21 13:40:04 compute-0 sudo[49528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:04 compute-0 python3.9[49530]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:40:04 compute-0 sudo[49528]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:05 compute-0 sudo[49680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekkvcquytjrtehzfbjdhcspjtubeeioy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002804.8263736-184-237243133432468/AnsiballZ_ini_file.py'
Jan 21 13:40:05 compute-0 sudo[49680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:05 compute-0 python3.9[49682]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:05 compute-0 sudo[49680]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:06 compute-0 sudo[49834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abrftwzucdocifstdxscflsjtmvdhjdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002805.7840228-194-76205221612227/AnsiballZ_ini_file.py'
Jan 21 13:40:06 compute-0 sudo[49834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:06 compute-0 python3.9[49836]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:06 compute-0 sudo[49834]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:06 compute-0 sudo[49986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdrxarfwyywkvrjyxfrrcbekebimgkhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002806.4562268-194-45837936090039/AnsiballZ_ini_file.py'
Jan 21 13:40:06 compute-0 sudo[49986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:07 compute-0 python3.9[49988]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:07 compute-0 sudo[49986]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:07 compute-0 sudo[50138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssdzmqlbazobhuwxtvhdhrbjezmzpmys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002807.2078652-209-255585482448402/AnsiballZ_ini_file.py'
Jan 21 13:40:07 compute-0 sudo[50138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:07 compute-0 python3.9[50140]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:07 compute-0 sudo[50138]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:08 compute-0 sudo[50290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erymvkksvwwmhjxgukmtwgfegprkzdci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002807.9534054-209-54953193925674/AnsiballZ_ini_file.py'
Jan 21 13:40:08 compute-0 sudo[50290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:08 compute-0 python3.9[50292]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:08 compute-0 sudo[50290]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:09 compute-0 sudo[50442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfkewnuehlsxnbtbivspzzzigwcaomkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002808.682205-224-164963269092930/AnsiballZ_stat.py'
Jan 21 13:40:09 compute-0 sudo[50442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:09 compute-0 python3.9[50444]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:40:09 compute-0 sudo[50442]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:09 compute-0 sudo[50565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrtolzynmqcxqoedaoaupkpvopmcstxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002808.682205-224-164963269092930/AnsiballZ_copy.py'
Jan 21 13:40:09 compute-0 sudo[50565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:09 compute-0 python3.9[50567]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002808.682205-224-164963269092930/.source _original_basename=.0j3u85f9 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:10 compute-0 sudo[50565]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:10 compute-0 sudo[50717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxgklmwtnakcdjfdemvxqlnvkyqlnrzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002810.1887238-239-268637533234711/AnsiballZ_file.py'
Jan 21 13:40:10 compute-0 sudo[50717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:10 compute-0 python3.9[50719]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:10 compute-0 sudo[50717]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:11 compute-0 sudo[50869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbzxudwgccguyzxmtilcntnnrlkjrzuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002810.9268658-247-40396373216183/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 21 13:40:11 compute-0 sudo[50869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:11 compute-0 python3.9[50871]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 21 13:40:11 compute-0 sudo[50869]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:12 compute-0 sudo[51021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekygbocewfzvpzdirvjdoxergnibtkxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002811.8270156-256-140389454500453/AnsiballZ_file.py'
Jan 21 13:40:12 compute-0 sudo[51021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:12 compute-0 python3.9[51023]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:12 compute-0 sudo[51021]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:13 compute-0 sudo[51173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvigcuvfcaeajgjkfwqrlmigpsumaoal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002812.679006-266-261248627346042/AnsiballZ_stat.py'
Jan 21 13:40:13 compute-0 sudo[51173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:13 compute-0 sudo[51173]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:13 compute-0 sudo[51296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdgpmhdxmadxhganfpiactfkgwcfhqup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002812.679006-266-261248627346042/AnsiballZ_copy.py'
Jan 21 13:40:13 compute-0 sudo[51296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:13 compute-0 sudo[51296]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:14 compute-0 sudo[51448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpefhvsqdycflsdjmdewotrkzlpofzww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002813.8980863-281-178691725598701/AnsiballZ_slurp.py'
Jan 21 13:40:14 compute-0 sudo[51448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:14 compute-0 python3.9[51450]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 21 13:40:14 compute-0 sudo[51448]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:15 compute-0 sudo[51623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyqesxnfelnglijgpegxscrfzfimbkor ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002814.7956567-290-206890208182593/async_wrapper.py j629316759519 300 /home/zuul/.ansible/tmp/ansible-tmp-1769002814.7956567-290-206890208182593/AnsiballZ_edpm_os_net_config.py _'
Jan 21 13:40:15 compute-0 sudo[51623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:15 compute-0 ansible-async_wrapper.py[51625]: Invoked with j629316759519 300 /home/zuul/.ansible/tmp/ansible-tmp-1769002814.7956567-290-206890208182593/AnsiballZ_edpm_os_net_config.py _
Jan 21 13:40:15 compute-0 ansible-async_wrapper.py[51628]: Starting module and watcher
Jan 21 13:40:15 compute-0 ansible-async_wrapper.py[51628]: Start watching 51629 (300)
Jan 21 13:40:15 compute-0 ansible-async_wrapper.py[51629]: Start module (51629)
Jan 21 13:40:15 compute-0 ansible-async_wrapper.py[51625]: Return async_wrapper task started.
Jan 21 13:40:15 compute-0 sudo[51623]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:15 compute-0 python3.9[51630]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 21 13:40:16 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 21 13:40:16 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 21 13:40:16 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 21 13:40:16 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 21 13:40:16 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0016] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0033] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0562] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0564] audit: op="connection-add" uuid="7423b76c-a3ba-4491-8849-a86ec82668ec" name="br-ex-br" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0583] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0584] audit: op="connection-add" uuid="6796aa1a-41d2-4f88-9ec7-7e010f9b1349" name="br-ex-port" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0599] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0601] audit: op="connection-add" uuid="91b265ce-7b56-4dfe-bfa2-b204c0b4ee75" name="eth1-port" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0614] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0616] audit: op="connection-add" uuid="de9e755b-0037-4ef1-9938-729e7cd85839" name="vlan20-port" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0632] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0633] audit: op="connection-add" uuid="b49f9fbb-58da-49d9-a6f1-b74cf7f1d938" name="vlan21-port" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0650] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0651] audit: op="connection-add" uuid="80289967-c9f5-4863-949c-57bc75b5ef2c" name="vlan22-port" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0666] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0667] audit: op="connection-add" uuid="418ba55d-d3f0-44f7-b662-c15e6c3b4e0a" name="vlan23-port" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0692] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0712] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0714] audit: op="connection-add" uuid="c7da1a0b-8e66-4362-85ca-19fe9b133d18" name="br-ex-if" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0768] audit: op="connection-update" uuid="d7910448-f944-5d05-b69e-270d04ed29fa" name="ci-private-network" args="ovs-external-ids.data,ipv4.never-default,ipv4.addresses,ipv4.dns,ipv4.routing-rules,ipv4.routes,ipv4.method,ipv6.routes,ipv6.routing-rules,ipv6.addresses,ipv6.dns,ipv6.addr-gen-mode,ipv6.method,connection.master,connection.port-type,connection.controller,connection.slave-type,connection.timestamp,ovs-interface.type" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0788] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0791] audit: op="connection-add" uuid="9654f8b1-396e-401c-9a67-52aa262a8521" name="vlan20-if" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0810] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0812] audit: op="connection-add" uuid="07d52a65-2435-4244-b570-d5d7586237c1" name="vlan21-if" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0830] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0832] audit: op="connection-add" uuid="d25fcd17-1bdc-48fe-bd74-94bc01108651" name="vlan22-if" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0853] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0854] audit: op="connection-add" uuid="e25ac4e0-2adc-4361-a477-1395d499c897" name="vlan23-if" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0868] audit: op="connection-delete" uuid="5f608bee-bbd6-3307-abae-f2f56ef54334" name="Wired connection 1" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0882] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.0884] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0893] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0898] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (7423b76c-a3ba-4491-8849-a86ec82668ec)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0899] audit: op="connection-activate" uuid="7423b76c-a3ba-4491-8849-a86ec82668ec" name="br-ex-br" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0901] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.0903] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0909] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0916] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6796aa1a-41d2-4f88-9ec7-7e010f9b1349)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0929] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.0930] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0934] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0939] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (91b265ce-7b56-4dfe-bfa2-b204c0b4ee75)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0941] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.0942] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0947] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0950] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (de9e755b-0037-4ef1-9938-729e7cd85839)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0974] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.0976] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0981] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0984] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b49f9fbb-58da-49d9-a6f1-b74cf7f1d938)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0985] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.0986] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0990] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0994] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (80289967-c9f5-4863-949c-57bc75b5ef2c)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.0996] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.0997] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1001] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1005] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (418ba55d-d3f0-44f7-b662-c15e6c3b4e0a)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1005] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1008] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1009] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1015] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.1016] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1018] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1022] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (c7da1a0b-8e66-4362-85ca-19fe9b133d18)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1023] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1025] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1027] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1028] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1029] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1040] device (eth1): disconnecting for new activation request.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1041] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1044] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1046] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1048] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1052] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.1053] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1057] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1061] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (9654f8b1-396e-401c-9a67-52aa262a8521)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1062] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1065] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1067] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1068] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1071] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.1072] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1076] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1080] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (07d52a65-2435-4244-b570-d5d7586237c1)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1081] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1084] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1086] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1087] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1090] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.1090] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1093] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1097] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (d25fcd17-1bdc-48fe-bd74-94bc01108651)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1098] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1101] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1102] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1103] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1106] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <warn>  [1769002818.1107] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1110] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1114] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (e25ac4e0-2adc-4361-a477-1395d499c897)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1114] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1117] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1119] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1120] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
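
The four <warn> entries above (vlan20 through vlan23) look like ordering noise rather than failures: NetworkManager attempts to write the per-device IPv4 forwarding sysctl while the OVS interface's kernel netdev does not exist yet, so the /proc/sys path returns "No such file or directory". Activation proceeds regardless, and the same interfaces reach 'activated' further down. A quick manual probe of the same knob (assumed command, not taken from this log):

    # fails with 'No such file or directory' until ovs-vswitchd has created the netdev
    sysctl net.ipv4.conf.vlan20.forwarding
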
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1121] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1134] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1136] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1139] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1140] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1146] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1150] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1154] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1157] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1159] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1164] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1168] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1170] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1172] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1177] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1183] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 systemd-udevd[51636]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1186] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1188] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1193] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 kernel: Timeout policy base is empty
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1198] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1202] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1204] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1209] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1214] dhcp4 (eth0): canceled DHCP transaction
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1214] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1214] dhcp4 (eth0): state changed no lease
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1216] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1227] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1230] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51631 uid=0 result="fail" reason="Device is not activated"
Jan 21 13:40:18 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1278] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1282] dhcp4 (eth0): state changed new lease, address=38.102.83.175
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1291] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1339] device (eth1): disconnecting for new activation request.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1340] audit: op="connection-activate" uuid="d7910448-f944-5d05-b69e-270d04ed29fa" name="ci-private-network" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1341] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1351] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 21 13:40:18 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1377] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51631 uid=0 result="success"
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1377] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1493] device (eth1): Activation: starting connection 'ci-private-network' (d7910448-f944-5d05-b69e-270d04ed29fa)
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1497] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1505] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1508] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1514] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1517] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 21 13:40:18 compute-0 kernel: br-ex: entered promiscuous mode
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1521] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1522] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1523] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1524] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1525] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1527] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1547] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1554] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1557] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1560] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1563] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1568] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1572] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1576] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1580] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1584] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1588] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1593] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1598] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1608] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1614] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1627] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1638] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1643] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 kernel: vlan22: entered promiscuous mode
Jan 21 13:40:18 compute-0 systemd-udevd[51635]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1650] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1654] device (eth1): Activation: successful, device activated.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1667] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1669] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1673] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 13:40:18 compute-0 kernel: vlan20: entered promiscuous mode
Jan 21 13:40:18 compute-0 kernel: vlan21: entered promiscuous mode
Jan 21 13:40:18 compute-0 systemd-udevd[51637]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1774] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1790] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1802] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 21 13:40:18 compute-0 kernel: vlan23: entered promiscuous mode
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1814] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1822] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1824] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1828] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 13:40:18 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1882] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1888] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1893] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1902] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1909] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1923] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1936] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1945] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1946] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1951] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1959] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1961] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 21 13:40:18 compute-0 NetworkManager[48860]: <info>  [1769002818.1964] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
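
The br-ex / *-port / *-if triplets above reflect NetworkManager's three-layer Open vSwitch model: an ovs-bridge profile, an ovs-port profile attached to it, and an ovs-interface profile attached to the port. A minimal sketch of how such a stack is built with nmcli, reusing the connection names from the log (addressing and VLAN options are assumptions, not read from this host):

    nmcli conn add type ovs-bridge conn.interface br-ex con-name br-ex
    nmcli conn add type ovs-port conn.interface br-ex master br-ex con-name br-ex-port
    nmcli conn add type ovs-interface slave-type ovs-port conn.interface br-ex \
        master br-ex-port con-name br-ex-if
    # access ports like vlan20 additionally carry an OVS tag (tag value assumed from the name)
    nmcli conn add type ovs-port conn.interface vlan20 master br-ex ovs-port.tag 20 con-name vlan20-port
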
Jan 21 13:40:19 compute-0 NetworkManager[48860]: <info>  [1769002819.3242] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51631 uid=0 result="success"
Jan 21 13:40:19 compute-0 sudo[51987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnbhnsfbkdgkrvtyyoxwrjlixtwlhkue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002818.9414115-290-251558900646125/AnsiballZ_async_status.py'
Jan 21 13:40:19 compute-0 sudo[51987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:19 compute-0 NetworkManager[48860]: <info>  [1769002819.4938] checkpoint[0x561373e40950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 21 13:40:19 compute-0 NetworkManager[48860]: <info>  [1769002819.4942] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51631 uid=0 result="success"
Jan 21 13:40:19 compute-0 python3.9[51989]: ansible-ansible.legacy.async_status Invoked with jid=j629316759519.51625 mode=status _async_dir=/root/.ansible_async
Jan 21 13:40:19 compute-0 sudo[51987]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:19 compute-0 NetworkManager[48860]: <info>  [1769002819.8783] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51631 uid=0 result="success"
Jan 21 13:40:19 compute-0 NetworkManager[48860]: <info>  [1769002819.8797] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51631 uid=0 result="success"
Jan 21 13:40:20 compute-0 NetworkManager[48860]: <info>  [1769002820.2289] audit: op="networking-control" arg="global-dns-configuration" pid=51631 uid=0 result="success"
Jan 21 13:40:20 compute-0 NetworkManager[48860]: <info>  [1769002820.2328] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 21 13:40:20 compute-0 NetworkManager[48860]: <info>  [1769002820.2773] audit: op="networking-control" arg="global-dns-configuration" pid=51631 uid=0 result="success"
Jan 21 13:40:20 compute-0 NetworkManager[48860]: <info>  [1769002820.3134] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51631 uid=0 result="success"
Jan 21 13:40:20 compute-0 NetworkManager[48860]: <info>  [1769002820.4920] checkpoint[0x561373e40a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 21 13:40:20 compute-0 NetworkManager[48860]: <info>  [1769002820.4926] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51631 uid=0 result="success"
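
Checkpoint/1 and Checkpoint/2 come from NetworkManager's D-Bus checkpoint API: the caller (pid 51631, most likely the os-net-config/nmstate run driven by the Ansible job) snapshots device state, repeatedly re-arms the rollback timeout while it works, and destroys the checkpoint to commit once connectivity is confirmed; had the caller died, NM would have rolled the network back automatically. A hand-driven equivalent (sketch; the 60 s timeout is an assumption):

    # snapshot all devices with a 60 s rollback timeout, then destroy the checkpoint to commit
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 60 0
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointDestroy o /org/freedesktop/NetworkManager/Checkpoint/1
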
Jan 21 13:40:20 compute-0 ansible-async_wrapper.py[51629]: Module complete (51629)
Jan 21 13:40:20 compute-0 ansible-async_wrapper.py[51628]: Done in kid B.
Jan 21 13:40:20 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 21 13:40:22 compute-0 sudo[52095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whofyixwfbuzoeklkwqzjicamzszveqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002818.9414115-290-251558900646125/AnsiballZ_async_status.py'
Jan 21 13:40:22 compute-0 sudo[52095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:23 compute-0 python3.9[52097]: ansible-ansible.legacy.async_status Invoked with jid=j629316759519.51625 mode=status _async_dir=/root/.ansible_async
Jan 21 13:40:23 compute-0 sudo[52095]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:23 compute-0 sudo[52195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnspvqfrbufkrtqywiypqhmfovkureys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002818.9414115-290-251558900646125/AnsiballZ_async_status.py'
Jan 21 13:40:23 compute-0 sudo[52195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:23 compute-0 python3.9[52197]: ansible-ansible.legacy.async_status Invoked with jid=j629316759519.51625 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 13:40:23 compute-0 sudo[52195]: pam_unix(sudo:session): session closed for user root
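
The async_status invocations with jid=j629316759519.51625 (mode=status twice, then mode=cleanup) are Ansible polling a task it launched asynchronously; the wrapper's "Module complete" / "Done in kid B." lines above mark the job finishing. Each poll simply reads the job's JSON spool file, which could also be inspected by hand (path derived from the _async_dir and jid in the log):

    cat /root/.ansible_async/j629316759519.51625
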
Jan 21 13:40:24 compute-0 sudo[52347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzucnnvvaghgujntoxzxjorfphzbzelu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002823.8523803-317-157814109560431/AnsiballZ_stat.py'
Jan 21 13:40:24 compute-0 sudo[52347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:24 compute-0 python3.9[52349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:40:24 compute-0 sudo[52347]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:24 compute-0 sudo[52470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-entfiqdmqdsldqvjctoiammqslfvqelt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002823.8523803-317-157814109560431/AnsiballZ_copy.py'
Jan 21 13:40:24 compute-0 sudo[52470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:25 compute-0 python3.9[52472]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002823.8523803-317-157814109560431/.source.returncode _original_basename=.auwjidur follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:25 compute-0 sudo[52470]: pam_unix(sudo:session): session closed for user root
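
The checksum logged for os-net-config.returncode, b6589fc6ab0dc82cf12099d1c2d40ab994e8410c, is the SHA-1 of the single character '0', so this copy records a successful os-net-config run. Easy to confirm locally:

    printf '0' | sha1sum    # b6589fc6ab0dc82cf12099d1c2d40ab994e8410c
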
Jan 21 13:40:25 compute-0 sudo[52622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktpqnolkcuknzrzhajpwddhtsoppviep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002825.5607853-333-145563283559828/AnsiballZ_stat.py'
Jan 21 13:40:25 compute-0 sudo[52622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:26 compute-0 python3.9[52624]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:40:26 compute-0 sudo[52622]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:26 compute-0 sudo[52746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usgppssisovuqrsakedsyvwbpzxnryzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002825.5607853-333-145563283559828/AnsiballZ_copy.py'
Jan 21 13:40:26 compute-0 sudo[52746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:26 compute-0 python3.9[52748]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002825.5607853-333-145563283559828/.source.cfg _original_basename=.26953fre follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:26 compute-0 sudo[52746]: pam_unix(sudo:session): session closed for user root
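
99-edpm-disable-network-config.cfg uses cloud-init's documented switch for leaving network configuration alone on later boots, now that os-net-config owns it. The payload itself is not visible in the log; the conventional content is:

    network:
      config: disabled
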
Jan 21 13:40:27 compute-0 sudo[52898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cihtlvruujlmyrqwkkgsivuuxirnnybv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002826.8794239-348-46057850857005/AnsiballZ_systemd.py'
Jan 21 13:40:27 compute-0 sudo[52898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:27 compute-0 python3.9[52900]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:40:28 compute-0 systemd[1]: Reloading Network Manager...
Jan 21 13:40:28 compute-0 NetworkManager[48860]: <info>  [1769002828.7285] audit: op="reload" arg="0" pid=52904 uid=0 result="success"
Jan 21 13:40:28 compute-0 NetworkManager[48860]: <info>  [1769002828.7293] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 21 13:40:29 compute-0 systemd[1]: Reloaded Network Manager.
Jan 21 13:40:29 compute-0 sudo[52898]: pam_unix(sudo:session): session closed for user root
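
state=reloaded maps onto systemctl reload NetworkManager, i.e. SIGHUP: per the config line above, NM re-reads NetworkManager.conf and the conf.d drop-ins without bouncing any device. The same reload can be triggered directly:

    systemctl reload NetworkManager    # or, on recent NM: nmcli general reload
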
Jan 21 13:40:29 compute-0 sshd-session[44860]: Connection closed by 192.168.122.30 port 44826
Jan 21 13:40:29 compute-0 sshd-session[44857]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:40:29 compute-0 systemd-logind[780]: Session 10 logged out. Waiting for processes to exit.
Jan 21 13:40:29 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 21 13:40:29 compute-0 systemd[1]: session-10.scope: Consumed 53.690s CPU time.
Jan 21 13:40:29 compute-0 systemd-logind[780]: Removed session 10.
Jan 21 13:40:35 compute-0 sshd-session[52935]: Accepted publickey for zuul from 192.168.122.30 port 55226 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:40:35 compute-0 systemd-logind[780]: New session 11 of user zuul.
Jan 21 13:40:35 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 21 13:40:35 compute-0 sshd-session[52935]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:40:36 compute-0 python3.9[53088]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:40:37 compute-0 python3.9[53243]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:40:38 compute-0 python3.9[53436]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:40:38 compute-0 sshd-session[52938]: Connection closed by 192.168.122.30 port 55226
Jan 21 13:40:38 compute-0 sshd-session[52935]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:40:38 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 21 13:40:38 compute-0 systemd[1]: session-11.scope: Consumed 2.702s CPU time.
Jan 21 13:40:38 compute-0 systemd-logind[780]: Session 11 logged out. Waiting for processes to exit.
Jan 21 13:40:38 compute-0 systemd-logind[780]: Removed session 11.
Jan 21 13:40:39 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 21 13:40:43 compute-0 sshd-session[53465]: Accepted publickey for zuul from 192.168.122.30 port 59348 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:40:43 compute-0 systemd-logind[780]: New session 12 of user zuul.
Jan 21 13:40:43 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 21 13:40:43 compute-0 sshd-session[53465]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:40:45 compute-0 python3.9[53618]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:40:46 compute-0 python3.9[53773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:40:46 compute-0 sudo[53927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgdpufelchaaxqaxnkfemyuefgnyfojn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002846.4223363-35-200642990939786/AnsiballZ_setup.py'
Jan 21 13:40:46 compute-0 sudo[53927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:47 compute-0 python3.9[53929]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:40:47 compute-0 sudo[53927]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:47 compute-0 sudo[54011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lofvzjdqutnmafqlmugenvwwbxsctvkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002846.4223363-35-200642990939786/AnsiballZ_dnf.py'
Jan 21 13:40:47 compute-0 sudo[54011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:47 compute-0 python3.9[54013]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:40:49 compute-0 sudo[54011]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:49 compute-0 sudo[54165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okrwfsyjxujnjmzuckjexolmqvpqplmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002849.2564375-47-25795446886952/AnsiballZ_setup.py'
Jan 21 13:40:49 compute-0 sudo[54165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:49 compute-0 python3.9[54167]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:40:50 compute-0 sudo[54165]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:50 compute-0 sudo[54360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihrskhneoqlpnlfnxmynyhouqlgdobjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002850.4418812-58-191534811217754/AnsiballZ_file.py'
Jan 21 13:40:50 compute-0 sudo[54360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:51 compute-0 python3.9[54362]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:51 compute-0 sudo[54360]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:51 compute-0 sudo[54512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkokvqhfznayyrmcvjgdgoullozmubnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002851.4472325-66-49842527250712/AnsiballZ_command.py'
Jan 21 13:40:51 compute-0 sudo[54512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:52 compute-0 python3.9[54514]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:40:52 compute-0 podman[54515]: 2026-01-21 13:40:52.185176148 +0000 UTC m=+0.068195663 system refresh
Jan 21 13:40:52 compute-0 sudo[54512]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:52 compute-0 sudo[54676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsrclqpthocpnuswxgcsuikgabacpahh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002852.4063685-74-30197939360619/AnsiballZ_stat.py'
Jan 21 13:40:52 compute-0 sudo[54676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:53 compute-0 python3.9[54678]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:40:53 compute-0 sudo[54676]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:40:53 compute-0 sudo[54799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egoyiqaynidlqthwwaqmfjoeblnajekr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002852.4063685-74-30197939360619/AnsiballZ_copy.py'
Jan 21 13:40:53 compute-0 sudo[54799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:53 compute-0 python3.9[54801]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002852.4063685-74-30197939360619/.source.json follow=False _original_basename=podman_network_config.j2 checksum=7d70938f2a5932e44dd49d2f5c65a90ffbde64b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:40:53 compute-0 sudo[54799]: pam_unix(sudo:session): session closed for user root
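
The earlier podman network inspect podman (which also triggered the one-time "system refresh" event) feeds the template that lands in /etc/containers/networks/podman.json. For the netavark backend selected below, a default-network definition typically looks like this (representative values, not the actual file contents):

    {
      "name": "podman",
      "driver": "bridge",
      "network_interface": "podman0",
      "subnets": [{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}],
      "ipv6_enabled": false,
      "internal": false,
      "dns_enabled": false
    }
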
Jan 21 13:40:54 compute-0 sudo[54951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stuwpzlnivjeatfrldbatgttnhmirphf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002853.9339652-89-265267989045200/AnsiballZ_stat.py'
Jan 21 13:40:54 compute-0 sudo[54951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:54 compute-0 python3.9[54953]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:40:54 compute-0 sudo[54951]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:54 compute-0 sudo[55074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaxwzrlcdaicxljljjqcgqkuwfdtsjuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002853.9339652-89-265267989045200/AnsiballZ_copy.py'
Jan 21 13:40:54 compute-0 sudo[55074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:55 compute-0 python3.9[55076]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769002853.9339652-89-265267989045200/.source.conf follow=False _original_basename=registries.conf.j2 checksum=97513ee69a4b3dc3c4fd06acbbcaa9a991e77aee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:40:55 compute-0 sudo[55074]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:55 compute-0 sudo[55226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwqfbvkttgbeglqoflijlictwfokdgys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002855.2650347-105-210818551374856/AnsiballZ_ini_file.py'
Jan 21 13:40:55 compute-0 sudo[55226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:55 compute-0 python3.9[55228]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:40:55 compute-0 sudo[55226]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:56 compute-0 sudo[55378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idrfxdudyfxnvadbrbdhheztcscmcikv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002856.0886202-105-47924921585729/AnsiballZ_ini_file.py'
Jan 21 13:40:56 compute-0 sudo[55378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:56 compute-0 python3.9[55380]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:40:56 compute-0 sudo[55378]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:57 compute-0 sudo[55530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vywlbefotbamptxsxmwdhckqtarbypxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002856.7310572-105-183601800837107/AnsiballZ_ini_file.py'
Jan 21 13:40:57 compute-0 sudo[55530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:57 compute-0 python3.9[55532]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:40:57 compute-0 sudo[55530]: pam_unix(sudo:session): session closed for user root
Jan 21 13:40:57 compute-0 sudo[55682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acmshdvmhjtjqdxzvixclsyrdiofwsop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002857.3786488-105-232141498614638/AnsiballZ_ini_file.py'
Jan 21 13:40:57 compute-0 sudo[55682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:57 compute-0 python3.9[55684]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:40:57 compute-0 sudo[55682]: pam_unix(sudo:session): session closed for user root
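
Taken together, the four ini_file tasks leave /etc/containers/containers.conf with the following sections (option names and values exactly as logged):

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"
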
Jan 21 13:40:58 compute-0 sudo[55834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewqseqheplacdkiicfvnnsocadnfswbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002858.1656253-136-56459387677986/AnsiballZ_dnf.py'
Jan 21 13:40:58 compute-0 sudo[55834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:40:58 compute-0 python3.9[55836]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:40:59 compute-0 sudo[55834]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:00 compute-0 sudo[55987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hogstaattohjgbmphehdtbuxkfwvmuqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002860.3164039-147-142646649763091/AnsiballZ_setup.py'
Jan 21 13:41:00 compute-0 sudo[55987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:00 compute-0 python3.9[55989]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:41:00 compute-0 sudo[55987]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:01 compute-0 sudo[56141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mufvuqtrrkaaxhknpgwvvcjaosrrgmel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002861.1472704-155-96498485719856/AnsiballZ_stat.py'
Jan 21 13:41:01 compute-0 sudo[56141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:01 compute-0 python3.9[56143]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:41:01 compute-0 sudo[56141]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:02 compute-0 sudo[56293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxmypnrfldsluatnfgvnhbbiuyctblcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002861.9103963-164-65323515129455/AnsiballZ_stat.py'
Jan 21 13:41:02 compute-0 sudo[56293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:02 compute-0 python3.9[56295]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:41:02 compute-0 sudo[56293]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:02 compute-0 sudo[56445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgftzhrbcdvwzaaadlwaofjvimpujulp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002862.6742456-174-265610381216685/AnsiballZ_command.py'
Jan 21 13:41:02 compute-0 sudo[56445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:03 compute-0 python3.9[56447]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:41:03 compute-0 sudo[56445]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:03 compute-0 sudo[56598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlecuqtvmjrjuriscuiylauotzdupisu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002863.3794823-184-5553906129961/AnsiballZ_service_facts.py'
Jan 21 13:41:03 compute-0 sudo[56598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:04 compute-0 python3.9[56600]: ansible-service_facts Invoked
Jan 21 13:41:04 compute-0 network[56617]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 13:41:04 compute-0 network[56618]: 'network-scripts' will be removed from distribution in near future.
Jan 21 13:41:04 compute-0 network[56619]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 13:41:07 compute-0 sudo[56598]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:08 compute-0 sudo[56902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjgvjkabsutiyalzrnreuctkzaczexdq ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769002868.2919672-199-82873393352755/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769002868.2919672-199-82873393352755/args'
Jan 21 13:41:08 compute-0 sudo[56902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:08 compute-0 sudo[56902]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:09 compute-0 sudo[57069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfjarwcdituvkruqrgzawgiuxqhwkcyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002869.13747-210-50845833842485/AnsiballZ_dnf.py'
Jan 21 13:41:09 compute-0 sudo[57069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:09 compute-0 python3.9[57071]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:41:11 compute-0 sudo[57069]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:12 compute-0 sudo[57222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmienksoieqmjvexojksmpsrzndtezyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002871.3513927-223-134054225698631/AnsiballZ_package_facts.py'
Jan 21 13:41:12 compute-0 sudo[57222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:12 compute-0 python3.9[57224]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 21 13:41:12 compute-0 sudo[57222]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:13 compute-0 sudo[57374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igetvlinykjhhhucclznzzntmdznsjnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002872.9902468-233-248980985567189/AnsiballZ_stat.py'
Jan 21 13:41:13 compute-0 sudo[57374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:13 compute-0 python3.9[57376]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:41:13 compute-0 sudo[57374]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:14 compute-0 sudo[57499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsqkagdzigcxhdzjqcnbihtcxhrxpvjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002872.9902468-233-248980985567189/AnsiballZ_copy.py'
Jan 21 13:41:14 compute-0 sudo[57499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:14 compute-0 python3.9[57501]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002872.9902468-233-248980985567189/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:14 compute-0 sudo[57499]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:14 compute-0 sudo[57653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyhmdrlxakiwmtgikjotolwnndwtkmfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002874.508318-248-196199981318750/AnsiballZ_stat.py'
Jan 21 13:41:14 compute-0 sudo[57653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:15 compute-0 python3.9[57655]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:41:15 compute-0 sudo[57653]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:15 compute-0 sudo[57778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpflnuvxwltiftsskzfkedzbvsajoaly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002874.508318-248-196199981318750/AnsiballZ_copy.py'
Jan 21 13:41:15 compute-0 sudo[57778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:15 compute-0 python3.9[57780]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002874.508318-248-196199981318750/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:15 compute-0 sudo[57778]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:16 compute-0 sudo[57932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhpwsnjrodopzefzfngpahrhxogbwsbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002876.210395-269-18373676244296/AnsiballZ_lineinfile.py'
Jan 21 13:41:16 compute-0 sudo[57932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:16 compute-0 python3.9[57934]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:17 compute-0 sudo[57932]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:17 compute-0 sudo[58086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npftxdyllyyxqkxmjeesezmnecpeymoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002877.5583467-284-154177431893972/AnsiballZ_setup.py'
Jan 21 13:41:17 compute-0 sudo[58086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:18 compute-0 python3.9[58088]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:41:18 compute-0 sudo[58086]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:19 compute-0 sudo[58170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhhdkbbnpyuxbusovthgwlbdwzhvtgvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002877.5583467-284-154177431893972/AnsiballZ_systemd.py'
Jan 21 13:41:19 compute-0 sudo[58170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:19 compute-0 python3.9[58172]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:41:19 compute-0 sudo[58170]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:20 compute-0 sudo[58324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdjnnogxixsrtaswpagqqathyvwkmmqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002879.963208-300-234632599965192/AnsiballZ_setup.py'
Jan 21 13:41:20 compute-0 sudo[58324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:20 compute-0 python3.9[58326]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:41:20 compute-0 sudo[58324]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:21 compute-0 sudo[58408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkehimeuhcmqychlbppecqksqtjwlafj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002879.963208-300-234632599965192/AnsiballZ_systemd.py'
Jan 21 13:41:21 compute-0 sudo[58408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:21 compute-0 python3.9[58410]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:41:21 compute-0 chronyd[787]: chronyd exiting
Jan 21 13:41:21 compute-0 systemd[1]: Stopping NTP client/server...
Jan 21 13:41:21 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 21 13:41:21 compute-0 systemd[1]: Stopped NTP client/server.
Jan 21 13:41:21 compute-0 systemd[1]: Starting NTP client/server...
Jan 21 13:41:21 compute-0 chronyd[58418]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 21 13:41:21 compute-0 chronyd[58418]: Frequency -23.188 +/- 0.457 ppm read from /var/lib/chrony/drift
Jan 21 13:41:21 compute-0 chronyd[58418]: Loaded seccomp filter (level 2)
Jan 21 13:41:21 compute-0 systemd[1]: Started NTP client/server.
Jan 21 13:41:21 compute-0 sudo[58408]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:21 compute-0 sshd-session[53468]: Connection closed by 192.168.122.30 port 59348
Jan 21 13:41:21 compute-0 sshd-session[53465]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:41:21 compute-0 systemd-logind[780]: Session 12 logged out. Waiting for processes to exit.
Jan 21 13:41:21 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 21 13:41:21 compute-0 systemd[1]: session-12.scope: Consumed 28.583s CPU time.
Jan 21 13:41:21 compute-0 systemd-logind[780]: Removed session 12.
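
[Annotation] The session above deploys /etc/chrony.conf and /etc/sysconfig/chronyd from templates, pins PEERNTP=no in /etc/sysconfig/network (ansible-lineinfile with regexp=^PEERNTP=, create=True, mode=0644), then enables and restarts chronyd. The lineinfile step is an idempotent replace-or-append edit; below is a minimal standalone sketch of that pattern in Python. The temp-file-plus-rename strategy is an illustrative assumption, not the module's actual implementation.

    #!/usr/bin/env python3
    # Sketch of the replace-or-append edit performed by ansible-lineinfile
    # above (regexp=^PEERNTP=, line=PEERNTP=no, create=True, mode=0644).
    import os
    import re
    import tempfile

    PATH = "/etc/sysconfig/network"
    PATTERN = re.compile(r"^PEERNTP=")
    LINE = "PEERNTP=no"

    def ensure_line(path, pattern, line):
        """Replace matching lines or append; return True if the file changed."""
        try:
            with open(path, encoding="utf-8") as fh:
                original = fh.read().splitlines()
        except FileNotFoundError:
            original = []                     # create=True behaviour
        updated = [line if pattern.search(l) else l for l in original]
        if line not in updated:
            updated.append(line)              # append when nothing matched
        if updated == original:
            return False                      # already compliant: no write
        # Stage in the same directory, then rename atomically into place.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "w", encoding="utf-8") as fh:
            fh.write("\n".join(updated) + "\n")
        os.chmod(tmp, 0o644)                  # mode=0644, as in the task
        os.replace(tmp, path)
        return True

    if __name__ == "__main__":
        print("changed:", ensure_line(PATH, PATTERN, LINE))
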
Jan 21 13:41:27 compute-0 sshd-session[58444]: Accepted publickey for zuul from 192.168.122.30 port 56532 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:41:27 compute-0 systemd-logind[780]: New session 13 of user zuul.
Jan 21 13:41:27 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 21 13:41:27 compute-0 sshd-session[58444]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:41:28 compute-0 sudo[58597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgrkchshzalrmevpuhhesaylgptpyjkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002888.0151153-17-188034403521027/AnsiballZ_file.py'
Jan 21 13:41:28 compute-0 sudo[58597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:28 compute-0 python3.9[58599]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:28 compute-0 sudo[58597]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:29 compute-0 sudo[58749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvmnnfooemapzjvbpmarwnpoanjjqoud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002888.9165962-29-218683674307810/AnsiballZ_stat.py'
Jan 21 13:41:29 compute-0 sudo[58749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:29 compute-0 python3.9[58751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:41:29 compute-0 sudo[58749]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:30 compute-0 sudo[58872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjokevyjoajndtnbfozolvetwllcohmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002888.9165962-29-218683674307810/AnsiballZ_copy.py'
Jan 21 13:41:30 compute-0 sudo[58872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:30 compute-0 python3.9[58874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002888.9165962-29-218683674307810/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:30 compute-0 sudo[58872]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:30 compute-0 sshd-session[58447]: Connection closed by 192.168.122.30 port 56532
Jan 21 13:41:30 compute-0 sshd-session[58444]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:41:30 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 21 13:41:30 compute-0 systemd[1]: session-13.scope: Consumed 1.694s CPU time.
Jan 21 13:41:30 compute-0 systemd-logind[780]: Session 13 logged out. Waiting for processes to exit.
Jan 21 13:41:30 compute-0 systemd-logind[780]: Removed session 13.
Jan 21 13:41:35 compute-0 sshd-session[58899]: Accepted publickey for zuul from 192.168.122.30 port 40664 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:41:35 compute-0 systemd-logind[780]: New session 14 of user zuul.
Jan 21 13:41:35 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 21 13:41:35 compute-0 sshd-session[58899]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:41:36 compute-0 python3.9[59052]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:41:37 compute-0 sudo[59206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdakqhjwwohnagvfwflokewjuhddkiiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002897.5107653-28-240166949682207/AnsiballZ_file.py'
Jan 21 13:41:37 compute-0 sudo[59206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:38 compute-0 python3.9[59208]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:38 compute-0 sudo[59206]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:38 compute-0 sudo[59381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xutzrlydjkagfawjxlnwqjglhvxazpkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002898.3492048-36-259041733745612/AnsiballZ_stat.py'
Jan 21 13:41:38 compute-0 sudo[59381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:39 compute-0 python3.9[59383]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:41:39 compute-0 sudo[59381]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:39 compute-0 sudo[59504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrwsjjykwksqtxhzetrmiisypvojcvmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002898.3492048-36-259041733745612/AnsiballZ_copy.py'
Jan 21 13:41:39 compute-0 sudo[59504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:39 compute-0 python3.9[59506]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769002898.3492048-36-259041733745612/.source.json _original_basename=.fa46hlvz follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:39 compute-0 sudo[59504]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:40 compute-0 sudo[59656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffhwzlcxdxzubgyorlhtsnpkrcgnblkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002900.1830714-59-91009601689435/AnsiballZ_stat.py'
Jan 21 13:41:40 compute-0 sudo[59656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:40 compute-0 python3.9[59658]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:41:40 compute-0 sudo[59656]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:41 compute-0 sudo[59779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yugwraibvauyzphhtsisexlvcfmtzkey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002900.1830714-59-91009601689435/AnsiballZ_copy.py'
Jan 21 13:41:41 compute-0 sudo[59779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:41 compute-0 python3.9[59781]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002900.1830714-59-91009601689435/.source _original_basename=.h_o39y7u follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:41 compute-0 sudo[59779]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:41 compute-0 sudo[59931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzisoebqiumdtsnsjkdjcwkobwbsmmsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002901.3829558-75-178664255923150/AnsiballZ_file.py'
Jan 21 13:41:41 compute-0 sudo[59931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:41 compute-0 python3.9[59933]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:41:41 compute-0 sudo[59931]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:42 compute-0 sudo[60083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sucklybkyoyrsnfiqjvpxzzbhhuyvhiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002902.0245328-83-193828627161121/AnsiballZ_stat.py'
Jan 21 13:41:42 compute-0 sudo[60083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:42 compute-0 python3.9[60085]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:41:42 compute-0 sudo[60083]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:42 compute-0 sudo[60206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zavodhywjrnzylyfvdodlzzydeoujykz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002902.0245328-83-193828627161121/AnsiballZ_copy.py'
Jan 21 13:41:42 compute-0 sudo[60206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:43 compute-0 python3.9[60208]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769002902.0245328-83-193828627161121/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:41:43 compute-0 sudo[60206]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:43 compute-0 sudo[60358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtaczcfizssnotwpptnkxhigkpfsjijb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002903.262337-83-107574617176738/AnsiballZ_stat.py'
Jan 21 13:41:43 compute-0 sudo[60358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:43 compute-0 python3.9[60360]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:41:43 compute-0 sudo[60358]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:44 compute-0 sudo[60481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgukasnluuukaerzvpybozyefkhjqjvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002903.262337-83-107574617176738/AnsiballZ_copy.py'
Jan 21 13:41:44 compute-0 sudo[60481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:44 compute-0 python3.9[60483]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769002903.262337-83-107574617176738/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:41:44 compute-0 sudo[60481]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:44 compute-0 sudo[60633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntakjtfvxqitjziikukbgjvjanaxhqzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002904.4915504-112-189674553410402/AnsiballZ_file.py'
Jan 21 13:41:44 compute-0 sudo[60633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:44 compute-0 python3.9[60635]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:44 compute-0 sudo[60633]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:45 compute-0 sudo[60785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efpmdeopzhrgkehutwiswihisyhidzgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002905.1376553-120-4260311095035/AnsiballZ_stat.py'
Jan 21 13:41:45 compute-0 sudo[60785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:45 compute-0 python3.9[60787]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:41:45 compute-0 sudo[60785]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:46 compute-0 sudo[60908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwltpdisyscqffluuyppniokjaecozko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002905.1376553-120-4260311095035/AnsiballZ_copy.py'
Jan 21 13:41:46 compute-0 sudo[60908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:46 compute-0 python3.9[60910]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002905.1376553-120-4260311095035/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:46 compute-0 sudo[60908]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:46 compute-0 sudo[61060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubgqexydcnszbbnakvzhmhwkpbddtmxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002906.4259424-135-30777830353773/AnsiballZ_stat.py'
Jan 21 13:41:46 compute-0 sudo[61060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:46 compute-0 python3.9[61062]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:41:46 compute-0 sudo[61060]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:47 compute-0 sudo[61183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csrnfjvjgghxybyzbddgofidipwltsmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002906.4259424-135-30777830353773/AnsiballZ_copy.py'
Jan 21 13:41:47 compute-0 sudo[61183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:47 compute-0 python3.9[61185]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002906.4259424-135-30777830353773/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:47 compute-0 sudo[61183]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:48 compute-0 sudo[61335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxamonjuvvcbnspuzusovyhpfhthbvhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002907.6384277-150-215561118408440/AnsiballZ_systemd.py'
Jan 21 13:41:48 compute-0 sudo[61335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:48 compute-0 python3.9[61337]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:41:48 compute-0 systemd[1]: Reloading.
Jan 21 13:41:48 compute-0 systemd-rc-local-generator[61360]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:41:48 compute-0 systemd-sysv-generator[61364]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:41:48 compute-0 systemd[1]: Reloading.
Jan 21 13:41:48 compute-0 systemd-rc-local-generator[61400]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:41:48 compute-0 systemd-sysv-generator[61404]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:41:49 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 21 13:41:49 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 21 13:41:49 compute-0 sudo[61335]: pam_unix(sudo:session): session closed for user root
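
[Annotation] The sequence above installs the edpm-container-shutdown unit file and its 91-edpm-container-shutdown.preset, then calls ansible.builtin.systemd with daemon_reload=True, enabled=True, state=started — which is why systemd logs two Reloading passes before the oneshot unit runs to completion (Starting.../Finished...). A minimal sketch of that activation step, using plain systemctl calls (error handling is illustrative):

    #!/usr/bin/env python3
    # Sketch of the activation performed for edpm-container-shutdown above:
    # daemon-reload to pick up the new unit and preset, then enable + start.
    import subprocess

    UNIT = "edpm-container-shutdown.service"

    def systemctl(*args):
        subprocess.run(["systemctl", *args], check=True)

    if __name__ == "__main__":
        systemctl("daemon-reload")      # re-read units; generators rerun
        systemctl("enable", UNIT)       # honours the installed preset
        systemctl("start", UNIT)        # oneshot: Starting... -> Finished...

The same deploy-unit/deploy-preset/daemon-reload/enable/start pattern repeats below for netns-placeholder.service.
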
Jan 21 13:41:49 compute-0 sudo[61563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdkbzlusbzzelenqjtsfoxvepjvbfzru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002909.3276942-158-196414873739088/AnsiballZ_stat.py'
Jan 21 13:41:49 compute-0 sudo[61563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:49 compute-0 python3.9[61565]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:41:49 compute-0 sudo[61563]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:50 compute-0 sudo[61686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljgbvzvdnlerihxzqvhafzdtafbiunfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002909.3276942-158-196414873739088/AnsiballZ_copy.py'
Jan 21 13:41:50 compute-0 sudo[61686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:50 compute-0 python3.9[61688]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002909.3276942-158-196414873739088/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:50 compute-0 sudo[61686]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:50 compute-0 sudo[61838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elyzzvlghlxbnpbpuhqiooxgkhyattqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002910.5342133-173-117210165307911/AnsiballZ_stat.py'
Jan 21 13:41:50 compute-0 sudo[61838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:50 compute-0 python3.9[61840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:41:50 compute-0 sudo[61838]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:51 compute-0 sudo[61961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvinewemtpkhctsqavinbgvomxgwvuor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002910.5342133-173-117210165307911/AnsiballZ_copy.py'
Jan 21 13:41:51 compute-0 sudo[61961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:51 compute-0 python3.9[61963]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002910.5342133-173-117210165307911/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:41:51 compute-0 sudo[61961]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:52 compute-0 sudo[62113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcwofmmpbmdlppvujbhqqvdgwpymbadt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002911.7883866-188-207153866107914/AnsiballZ_systemd.py'
Jan 21 13:41:52 compute-0 sudo[62113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:52 compute-0 python3.9[62115]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:41:52 compute-0 systemd[1]: Reloading.
Jan 21 13:41:52 compute-0 systemd-sysv-generator[62146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:41:52 compute-0 systemd-rc-local-generator[62143]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:41:53 compute-0 systemd[1]: Reloading.
Jan 21 13:41:53 compute-0 systemd-sysv-generator[62185]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:41:53 compute-0 systemd-rc-local-generator[62181]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:41:53 compute-0 systemd[1]: Starting Create netns directory...
Jan 21 13:41:53 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 13:41:53 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 13:41:53 compute-0 systemd[1]: Finished Create netns directory.
Jan 21 13:41:53 compute-0 sudo[62113]: pam_unix(sudo:session): session closed for user root
Jan 21 13:41:54 compute-0 python3.9[62341]: ansible-ansible.builtin.service_facts Invoked
Jan 21 13:41:55 compute-0 network[62358]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 13:41:55 compute-0 network[62359]: 'network-scripts' will be removed from distribution in near future.
Jan 21 13:41:55 compute-0 network[62360]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 13:41:59 compute-0 sudo[62620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaqobgteekfgprnfbbhseilxkedktroe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002918.6823416-204-26371272244166/AnsiballZ_systemd.py'
Jan 21 13:41:59 compute-0 sudo[62620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:41:59 compute-0 python3.9[62622]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:41:59 compute-0 systemd[1]: Reloading.
Jan 21 13:41:59 compute-0 systemd-rc-local-generator[62651]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:41:59 compute-0 systemd-sysv-generator[62654]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:41:59 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 21 13:41:59 compute-0 iptables.init[62661]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 21 13:41:59 compute-0 iptables.init[62661]: iptables: Flushing firewall rules: [  OK  ]
Jan 21 13:41:59 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 21 13:41:59 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 21 13:41:59 compute-0 sudo[62620]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:00 compute-0 sudo[62855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uikxlvonskifgqknguzxpdvoaaayxhwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002920.1125107-204-134380774734903/AnsiballZ_systemd.py'
Jan 21 13:42:00 compute-0 sudo[62855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:00 compute-0 python3.9[62857]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:42:00 compute-0 sudo[62855]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:01 compute-0 sudo[63009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wepryipxubfdeblaqcymskecaybtraat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002921.0055642-220-66222724173155/AnsiballZ_systemd.py'
Jan 21 13:42:01 compute-0 sudo[63009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:01 compute-0 python3.9[63011]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:42:02 compute-0 systemd[1]: Reloading.
Jan 21 13:42:02 compute-0 systemd-rc-local-generator[63041]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:42:02 compute-0 systemd-sysv-generator[63044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:42:02 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 21 13:42:02 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 21 13:42:03 compute-0 sudo[63009]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:03 compute-0 sudo[63201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otchfnnvddemkigjbiarwqnkjamrfgcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002923.1907184-228-121959617753429/AnsiballZ_command.py'
Jan 21 13:42:03 compute-0 sudo[63201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:03 compute-0 python3.9[63203]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:42:03 compute-0 sudo[63201]: pam_unix(sudo:session): session closed for user root
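
[Annotation] The tasks above cut the node over from the legacy iptables services to nftables: iptables.service and ip6tables.service are stopped and disabled (the init script resets chain policies to ACCEPT and flushes the rules), nftables.service is enabled and started, and `nft flush ruleset` clears the kernel ruleset so the edpm chains can be loaded from a known-empty state. A condensed sketch of the same cutover (illustrative; disabling a unit that is not installed would fail here):

    #!/usr/bin/env python3
    # Sketch of the iptables -> nftables cutover shown above.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        for svc in ("iptables.service", "ip6tables.service"):
            run("systemctl", "disable", "--now", svc)   # stop + disable
        run("systemctl", "enable", "--now", "nftables.service")
        run("nft", "flush", "ruleset")  # matches the ansible command task
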
Jan 21 13:42:04 compute-0 sudo[63354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzdumxeyyskwdxwygmsksoekceaxhxjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002924.2587485-242-182667242178865/AnsiballZ_stat.py'
Jan 21 13:42:04 compute-0 sudo[63354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:04 compute-0 python3.9[63356]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:42:04 compute-0 sudo[63354]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:05 compute-0 sudo[63479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaaaicutnhoqikkthlbvxbznpmxohgrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002924.2587485-242-182667242178865/AnsiballZ_copy.py'
Jan 21 13:42:05 compute-0 sudo[63479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:05 compute-0 python3.9[63481]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002924.2587485-242-182667242178865/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:05 compute-0 sudo[63479]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:05 compute-0 sudo[63632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bslovgaahgueddvhngbrjocafdpdoywi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002925.6239624-257-49751229074452/AnsiballZ_systemd.py'
Jan 21 13:42:05 compute-0 sudo[63632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:06 compute-0 python3.9[63634]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:42:06 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 21 13:42:06 compute-0 sshd[1003]: Received SIGHUP; restarting.
Jan 21 13:42:06 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 21 13:42:06 compute-0 sshd[1003]: Server listening on 0.0.0.0 port 22.
Jan 21 13:42:06 compute-0 sshd[1003]: Server listening on :: port 22.
Jan 21 13:42:06 compute-0 sudo[63632]: pam_unix(sudo:session): session closed for user root
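
[Annotation] The sshd_config deployment above uses the copy module's validate='/usr/sbin/sshd -T -f %s', so the candidate file is parsed by sshd before the live /etc/ssh/sshd_config is replaced, and the service is then reloaded (SIGHUP, listeners rebound) rather than restarted. A sketch of that validate-before-install pattern; the staging path is a hypothetical placeholder:

    #!/usr/bin/env python3
    # Sketch of the validate-then-install pattern used for sshd_config
    # above, followed by a reload instead of a restart.
    import os
    import shutil
    import subprocess

    SRC = "/tmp/sshd_config.candidate"   # hypothetical staged config
    DST = "/etc/ssh/sshd_config"

    if __name__ == "__main__":
        # sshd -T -f parses and dumps the effective config; a non-zero
        # exit aborts before the live file is touched.
        subprocess.run(["/usr/sbin/sshd", "-T", "-f", SRC],
                       check=True, stdout=subprocess.DEVNULL)
        os.chmod(SRC, 0o600)             # mode=0600, as in the task
        shutil.move(SRC, DST)
        subprocess.run(["systemctl", "reload", "sshd"], check=True)
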
Jan 21 13:42:06 compute-0 sudo[63788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhxjjysknsbjlegqpjuxmllvkelpwpuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002926.5472546-265-61637166338387/AnsiballZ_file.py'
Jan 21 13:42:06 compute-0 sudo[63788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:07 compute-0 python3.9[63790]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:07 compute-0 sudo[63788]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:07 compute-0 sudo[63940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrgxvwjrukycxebvorzwekbmcdbodxch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002927.289402-273-275917112385717/AnsiballZ_stat.py'
Jan 21 13:42:07 compute-0 sudo[63940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:07 compute-0 python3.9[63942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:42:07 compute-0 sudo[63940]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:08 compute-0 sudo[64063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csqqloefvozizsixyteyuoknvogptkkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002927.289402-273-275917112385717/AnsiballZ_copy.py'
Jan 21 13:42:08 compute-0 sudo[64063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:08 compute-0 python3.9[64065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002927.289402-273-275917112385717/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:08 compute-0 sudo[64063]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:09 compute-0 sudo[64215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdmoumkvgkjczykgmccyquhbzbcpekrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002928.7028558-291-36275868534168/AnsiballZ_timezone.py'
Jan 21 13:42:09 compute-0 sudo[64215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:09 compute-0 python3.9[64217]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 13:42:09 compute-0 systemd[1]: Starting Time & Date Service...
Jan 21 13:42:09 compute-0 systemd[1]: Started Time & Date Service.
Jan 21 13:42:09 compute-0 sudo[64215]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:10 compute-0 sudo[64371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snhverqyjqumzaaypknjqbjcunytnnvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002929.7718453-300-88946585324019/AnsiballZ_file.py'
Jan 21 13:42:10 compute-0 sudo[64371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:10 compute-0 python3.9[64373]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:10 compute-0 sudo[64371]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:10 compute-0 sudo[64523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uubybiugnttkiavvrrztyclfptpwypiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002930.4055378-308-133535548911999/AnsiballZ_stat.py'
Jan 21 13:42:10 compute-0 sudo[64523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:10 compute-0 python3.9[64525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:42:10 compute-0 sudo[64523]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:11 compute-0 sudo[64646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvevpnnovzhbqbofwyiustffezbrmaig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002930.4055378-308-133535548911999/AnsiballZ_copy.py'
Jan 21 13:42:11 compute-0 sudo[64646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:11 compute-0 python3.9[64648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002930.4055378-308-133535548911999/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:11 compute-0 sudo[64646]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:11 compute-0 sudo[64798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clsziwfzqagbbxattpludwjsgqffojmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002931.6417792-323-132849077620096/AnsiballZ_stat.py'
Jan 21 13:42:11 compute-0 sudo[64798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:12 compute-0 python3.9[64800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:42:12 compute-0 sudo[64798]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:12 compute-0 sudo[64921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxpujomhqetlvzckcgbybmulacsvcmzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002931.6417792-323-132849077620096/AnsiballZ_copy.py'
Jan 21 13:42:12 compute-0 sudo[64921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:12 compute-0 python3.9[64923]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769002931.6417792-323-132849077620096/.source.yaml _original_basename=.55wamjgi follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:12 compute-0 sudo[64921]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:13 compute-0 sudo[65073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlcipfafguonygmdlkskwklozbjltugo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002932.8544264-338-84309408430547/AnsiballZ_stat.py'
Jan 21 13:42:13 compute-0 sudo[65073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:13 compute-0 python3.9[65075]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:42:13 compute-0 sudo[65073]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:13 compute-0 sudo[65196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufbutitaroybcwbdpgotfihvbtpnzoey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002932.8544264-338-84309408430547/AnsiballZ_copy.py'
Jan 21 13:42:13 compute-0 sudo[65196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:13 compute-0 python3.9[65198]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002932.8544264-338-84309408430547/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:13 compute-0 sudo[65196]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:14 compute-0 sudo[65348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysrazatqobpsbetngpzbyopgtyewgnit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002934.1647885-353-80088549570731/AnsiballZ_command.py'
Jan 21 13:42:14 compute-0 sudo[65348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:14 compute-0 python3.9[65350]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:42:14 compute-0 sudo[65348]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:15 compute-0 sudo[65501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdmqbawaduhqgnfkygyutiaapjzhossq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002934.8375769-361-38411005977375/AnsiballZ_command.py'
Jan 21 13:42:15 compute-0 sudo[65501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:15 compute-0 python3.9[65503]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:42:15 compute-0 sudo[65501]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:16 compute-0 sudo[65654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gajgvotnttpaeqagkeezazdovfqgolyl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769002935.5683305-369-140995306175175/AnsiballZ_edpm_nftables_from_files.py'
Jan 21 13:42:16 compute-0 sudo[65654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:16 compute-0 python3[65656]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 13:42:16 compute-0 sudo[65654]: pam_unix(sudo:session): session closed for user root
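
[Annotation] At this point the play has dropped several per-service snippet files under /var/lib/edpm-config/firewall (ceph-networks.yaml, sshd-networks.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml), and the custom edpm_nftables_from_files module is invoked with src=/var/lib/edpm-config/firewall to aggregate them. The module's internals are not shown in this log; the following is only an assumed sketch of that aggregate-the-snippets shape (requires PyYAML):

    #!/usr/bin/env python3
    # Hedged sketch: read every YAML snippet under the firewall directory
    # and concatenate the entries into one rule list. This is an assumption
    # about edpm_nftables_from_files, not its actual implementation.
    import glob
    import yaml   # PyYAML

    SRC = "/var/lib/edpm-config/firewall"

    def collect_rules(src):
        rules = []
        for path in sorted(glob.glob(f"{src}/*.yaml")):
            with open(path, encoding="utf-8") as fh:
                data = yaml.safe_load(fh) or []
            rules.extend(data)          # assume each file holds a list
        return rules

    if __name__ == "__main__":
        for rule in collect_rules(SRC):
            print(rule)
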
Jan 21 13:42:16 compute-0 sudo[65806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhupfptwozlwhgvcqnqjkxtprnmlxidg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002936.4407222-377-105466639307213/AnsiballZ_stat.py'
Jan 21 13:42:16 compute-0 sudo[65806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:16 compute-0 python3.9[65808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:42:16 compute-0 sudo[65806]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:17 compute-0 sudo[65929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sclzbzwcriwcetfyssnscvhqqwhikpwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002936.4407222-377-105466639307213/AnsiballZ_copy.py'
Jan 21 13:42:17 compute-0 sudo[65929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:17 compute-0 python3.9[65931]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002936.4407222-377-105466639307213/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:17 compute-0 sudo[65929]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:18 compute-0 sudo[66081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldacpcdpdhlggvxoguvyzbxahzqcvgvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002937.732308-392-139622077670560/AnsiballZ_stat.py'
Jan 21 13:42:18 compute-0 sudo[66081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:18 compute-0 python3.9[66083]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:42:18 compute-0 sudo[66081]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:18 compute-0 sudo[66204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxpxmajzmovpjbxanmkbltirisduezsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002937.732308-392-139622077670560/AnsiballZ_copy.py'
Jan 21 13:42:18 compute-0 sudo[66204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:18 compute-0 python3.9[66206]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002937.732308-392-139622077670560/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:18 compute-0 sudo[66204]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:19 compute-0 sudo[66356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvdohiovdlgoovkjetkitmquggspqjrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002939.038404-407-232194023447796/AnsiballZ_stat.py'
Jan 21 13:42:19 compute-0 sudo[66356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:19 compute-0 python3.9[66358]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:42:19 compute-0 sudo[66356]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:19 compute-0 sudo[66479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbsnoymrugswawqhbfgzizzhzoyidctk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002939.038404-407-232194023447796/AnsiballZ_copy.py'
Jan 21 13:42:19 compute-0 sudo[66479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:20 compute-0 python3.9[66481]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002939.038404-407-232194023447796/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:20 compute-0 sudo[66479]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:20 compute-0 sudo[66631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyjaphkcjcwjiubffqczbwzwxwwuhwqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002940.3012276-422-25199978685171/AnsiballZ_stat.py'
Jan 21 13:42:20 compute-0 sudo[66631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:20 compute-0 python3.9[66633]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:42:20 compute-0 sudo[66631]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:21 compute-0 sudo[66754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhakytysdtsnmnpelmkbrgeynseaijvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002940.3012276-422-25199978685171/AnsiballZ_copy.py'
Jan 21 13:42:21 compute-0 sudo[66754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:21 compute-0 python3.9[66756]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002940.3012276-422-25199978685171/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:21 compute-0 sudo[66754]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:21 compute-0 sudo[66906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptlrhvedpocggmiedptkuwvohugueaqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002941.4947796-437-280791997110998/AnsiballZ_stat.py'
Jan 21 13:42:21 compute-0 sudo[66906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:21 compute-0 python3.9[66908]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:42:21 compute-0 sudo[66906]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:22 compute-0 sudo[67029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppyeicwazyeeesootxtkzhtovuiltmlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002941.4947796-437-280791997110998/AnsiballZ_copy.py'
Jan 21 13:42:22 compute-0 sudo[67029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:22 compute-0 python3.9[67031]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769002941.4947796-437-280791997110998/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:22 compute-0 sudo[67029]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:22 compute-0 sudo[67181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnjaoqmnedopgqnvxalmkdwamknstoxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002942.6974869-452-27240188669610/AnsiballZ_file.py'
Jan 21 13:42:22 compute-0 sudo[67181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:23 compute-0 python3.9[67183]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:23 compute-0 sudo[67181]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:23 compute-0 sudo[67333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caktzegseghobxbhvaeacnoajttnhltd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002943.3489153-460-197181187401060/AnsiballZ_command.py'
Jan 21 13:42:23 compute-0 sudo[67333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:23 compute-0 python3.9[67335]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:42:23 compute-0 sudo[67333]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:24 compute-0 sudo[67492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlvbdsnqbyfwndqluhbzgpuzmuqcvdhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002944.099256-468-260479065341763/AnsiballZ_blockinfile.py'
Jan 21 13:42:24 compute-0 sudo[67492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:24 compute-0 python3.9[67494]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:24 compute-0 sudo[67492]: pam_unix(sudo:session): session closed for user root
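
The tasks above stage the EDPM nftables snippets under /etc/nftables/ and syntax-check them before anything touches the kernel; the blockinfile task then makes /etc/sysconfig/nftables.conf include them at boot (itself validated with nft -c -f %s). A minimal shell sketch of the dry-run step, using exactly the file names and order logged above (chains must precede the flushes, rules, and jumps that reference them):

    #!/bin/bash
    # Concatenate the staged EDPM snippets in dependency order and
    # dry-run them with nft -c; nothing is loaded into the kernel yet.
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
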
Jan 21 13:42:25 compute-0 sudo[67645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkczhiudxqydcgtrbiqnjytoluaviwxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002945.0409305-477-86309303277396/AnsiballZ_file.py'
Jan 21 13:42:25 compute-0 sudo[67645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:25 compute-0 python3.9[67647]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:25 compute-0 sudo[67645]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:25 compute-0 sudo[67797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvgakqmmjphcbqmvwyaloxsivbwlibqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002945.696485-477-66826213042153/AnsiballZ_file.py'
Jan 21 13:42:25 compute-0 sudo[67797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:26 compute-0 python3.9[67799]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:26 compute-0 sudo[67797]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:26 compute-0 sudo[67949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qazgzboywsclltgaxiiwwroxcwfpufec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002946.337588-492-31144247280491/AnsiballZ_mount.py'
Jan 21 13:42:26 compute-0 sudo[67949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:27 compute-0 python3.9[67951]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 13:42:27 compute-0 sudo[67949]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:27 compute-0 sudo[68102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqvkpkqwwawlynjnslkraqjyakorewwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002947.2731664-492-115596280431669/AnsiballZ_mount.py'
Jan 21 13:42:27 compute-0 sudo[68102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:27 compute-0 python3.9[68104]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 13:42:27 compute-0 sudo[68102]: pam_unix(sudo:session): session closed for user root
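
The two ansible.posix.mount tasks above give the node persistent hugetlbfs mounts for 1 GiB and 2 MiB pages on the directories created just before them. A hand-run equivalent, as a sketch (state=mounted with boot=True both mounts the filesystem and persists it; the fstab lines shown are an assumption about what gets written):

    # Mount points /dev/hugepages1G and /dev/hugepages2M already exist
    # (owner zuul, group hugetlbfs, mode 0775, per the file tasks above).
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # Roughly the /etc/fstab entries the module would persist:
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
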
Jan 21 13:42:28 compute-0 sshd-session[58902]: Connection closed by 192.168.122.30 port 40664
Jan 21 13:42:28 compute-0 sshd-session[58899]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:42:28 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 21 13:42:28 compute-0 systemd[1]: session-14.scope: Consumed 37.863s CPU time.
Jan 21 13:42:28 compute-0 systemd-logind[780]: Session 14 logged out. Waiting for processes to exit.
Jan 21 13:42:28 compute-0 systemd-logind[780]: Removed session 14.
Jan 21 13:42:32 compute-0 sshd-session[68130]: Accepted publickey for zuul from 192.168.122.30 port 49292 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:42:32 compute-0 systemd-logind[780]: New session 15 of user zuul.
Jan 21 13:42:32 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 21 13:42:32 compute-0 sshd-session[68130]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:42:33 compute-0 sudo[68283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfxzzlvchihjazdqhvyijspjshqwwtfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002952.9921622-16-227656292268850/AnsiballZ_tempfile.py'
Jan 21 13:42:33 compute-0 sudo[68283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:33 compute-0 python3.9[68285]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 21 13:42:33 compute-0 sudo[68283]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:34 compute-0 sudo[68435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wleztxaxnduuhzrdrxmrehqpvxflmrgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002953.8526316-28-235597975397324/AnsiballZ_stat.py'
Jan 21 13:42:34 compute-0 sudo[68435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:34 compute-0 python3.9[68437]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:42:34 compute-0 sudo[68435]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:35 compute-0 sudo[68587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmnfpofvynmdjzvvdbwqfrmgjlldptms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002954.695823-38-104880430766605/AnsiballZ_setup.py'
Jan 21 13:42:35 compute-0 sudo[68587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:35 compute-0 python3.9[68589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:42:35 compute-0 sudo[68587]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:36 compute-0 sudo[68739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdphyrrsrkqsquxcjaqkmmsyxirtrykb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002955.7689323-47-107145261286462/AnsiballZ_blockinfile.py'
Jan 21 13:42:36 compute-0 sudo[68739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:36 compute-0 python3.9[68741]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeFBF9sLBUut0jERuw8eMRSTmHQPq77CYOZnLVmOaBCBCSPbeUxgTSDGAypqgANDFspz2HthTRfZ/0obiaSrheRKp8JI8vmjOkZpbGmM9pA3z2/L+A3dJtYryJ7HhNyc/RGv6tDqg7CqaPNO1VlKkJaCblvoGA/sTsuLgg72/kyPlgz+xxZIIXUolJRTelowGJeLl4FZhJevZEH/0RgRZW5SIe7QgvHYRWR/yATnINpKKPRydWLgea+k//th3RGx9GuUGWuDCPeJvxRKrqAMI8uxmSm/8+i6EK0vVqkOdcdQRVsHY2r6DJ55kbxKE6zwdr/2TWUC4j2L+d8AvLLtPL6yx6yOUDHD9KicyxruiQYYwkskMnkAWJeSL1egxNDFgJCw7P56bEGIyFhPIAzxR1E0ZuAQqv/W1KYFqspYxqjsccWFRon0TW3DyHzXSXRZkvgVBAyZPlZBTcsw58X536t/6unFkYBPfaCNmQIGhaOZ0dFgK7Bl1Jj1cThi6d/bE=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINb+axAz9AQLLF8DlI2l4unh/lYce78aEpf6RASalCvh
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHJ6/CEvuTJeUBrk8Nw85tSdtMYRRRBEbjPN601M+Wvbkfd6a4tr5R6VV6/ot3jZ0PwT+0BaXWVuiTlpRpxsLDo=
                                             create=True mode=0644 path=/tmp/ansible.tr3wypvt state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:36 compute-0 sudo[68739]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:37 compute-0 sudo[68891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njetnxhtjonfrlhzfiyiemznxmdxgxzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002956.6198957-55-280944222176245/AnsiballZ_command.py'
Jan 21 13:42:37 compute-0 sudo[68891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:37 compute-0 python3.9[68893]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tr3wypvt' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:42:37 compute-0 sudo[68891]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:38 compute-0 sudo[69045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzswjxjedbdivbxufrgkhiesrqlavetq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002957.7939963-63-61440122691173/AnsiballZ_file.py'
Jan 21 13:42:38 compute-0 sudo[69045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:38 compute-0 python3.9[69047]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tr3wypvt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:38 compute-0 sudo[69045]: pam_unix(sudo:session): session closed for user root
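
Session 15 rebuilds /etc/ssh/ssh_known_hosts in three steps: a managed block of gathered host keys is written to a tempfile, the tempfile is copied over the real file with a shell redirect, and the tempfile is removed. A condensed sketch of the same flow (key material elided; the markers mirror blockinfile's defaults):

    tmp=$(mktemp /tmp/ansible.XXXXXXXX)                # ansible.builtin.tempfile
    {
        echo '# BEGIN ANSIBLE MANAGED BLOCK'
        # one line per gathered host key (rsa, ed25519, ecdsa), e.g.:
        echo 'compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAA...'
        echo '# END ANSIBLE MANAGED BLOCK'
    } > "$tmp"
    cat "$tmp" > /etc/ssh/ssh_known_hosts              # ansible.legacy.command
    rm -f "$tmp"                                       # ansible.builtin.file state=absent
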
Jan 21 13:42:38 compute-0 sshd-session[68133]: Connection closed by 192.168.122.30 port 49292
Jan 21 13:42:38 compute-0 sshd-session[68130]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:42:38 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 21 13:42:38 compute-0 systemd[1]: session-15.scope: Consumed 3.540s CPU time.
Jan 21 13:42:38 compute-0 systemd-logind[780]: Session 15 logged out. Waiting for processes to exit.
Jan 21 13:42:38 compute-0 systemd-logind[780]: Removed session 15.
Jan 21 13:42:39 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 13:42:44 compute-0 sshd-session[69074]: Accepted publickey for zuul from 192.168.122.30 port 44820 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:42:44 compute-0 systemd-logind[780]: New session 16 of user zuul.
Jan 21 13:42:44 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 21 13:42:44 compute-0 sshd-session[69074]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:42:45 compute-0 python3.9[69227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:42:46 compute-0 sudo[69381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcssrejmjyiqerkzkprpaxaekgiczrim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002965.7257454-27-71151046988750/AnsiballZ_systemd.py'
Jan 21 13:42:46 compute-0 sudo[69381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:46 compute-0 python3.9[69383]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 13:42:46 compute-0 sudo[69381]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:47 compute-0 sudo[69535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifhimsbdnifrshylgglnvkguuauzhrdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002966.839711-35-134957795300196/AnsiballZ_systemd.py'
Jan 21 13:42:47 compute-0 sudo[69535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:47 compute-0 python3.9[69537]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:42:47 compute-0 sudo[69535]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:48 compute-0 sudo[69688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofxxdrwijeicemyoumroshpuyewximwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002967.7277381-44-50563197745386/AnsiballZ_command.py'
Jan 21 13:42:48 compute-0 sudo[69688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:48 compute-0 python3.9[69690]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:42:48 compute-0 sudo[69688]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:49 compute-0 sudo[69841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtcpmibefbfthfswskedokrtjtaijwbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002968.6198528-52-120346799126533/AnsiballZ_stat.py'
Jan 21 13:42:49 compute-0 sudo[69841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:49 compute-0 python3.9[69843]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:42:49 compute-0 sudo[69841]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:49 compute-0 sudo[69995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrraupujksgmzmbvcwhixlsktxendhep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002969.466257-60-17925911189178/AnsiballZ_command.py'
Jan 21 13:42:49 compute-0 sudo[69995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:50 compute-0 python3.9[69997]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:42:50 compute-0 sudo[69995]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:50 compute-0 sudo[70150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjcllkvrdtohpecnesqoyupqvpbaaynn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002970.32478-68-213547374371103/AnsiballZ_file.py'
Jan 21 13:42:50 compute-0 sudo[70150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:50 compute-0 python3.9[70152]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:42:50 compute-0 sudo[70150]: pam_unix(sudo:session): session closed for user root
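
Session 16 is the apply phase for the ruleset staged earlier: the chain definitions are loaded unconditionally, while the flush/rules/update-jumps set is loaded only if the edpm-rules.nft.changed sentinel touched during staging still exists, and the sentinel is then removed. As a sketch:

    #!/bin/bash
    # Chain declarations are safe to re-run; load them every time.
    nft -f /etc/nftables/edpm-chains.nft
    # Re-apply rules only when the staging run flagged a change.
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi
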
Jan 21 13:42:51 compute-0 sshd-session[69077]: Connection closed by 192.168.122.30 port 44820
Jan 21 13:42:51 compute-0 sshd-session[69074]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:42:51 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 21 13:42:51 compute-0 systemd[1]: session-16.scope: Consumed 4.751s CPU time.
Jan 21 13:42:51 compute-0 systemd-logind[780]: Session 16 logged out. Waiting for processes to exit.
Jan 21 13:42:51 compute-0 systemd-logind[780]: Removed session 16.
Jan 21 13:42:56 compute-0 sshd-session[70178]: Accepted publickey for zuul from 192.168.122.30 port 45958 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:42:56 compute-0 systemd-logind[780]: New session 17 of user zuul.
Jan 21 13:42:56 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 21 13:42:56 compute-0 sshd-session[70178]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:42:57 compute-0 python3.9[70331]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:42:58 compute-0 sudo[70485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoekrvonovxrejeyxyaqndcnddojvcso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002977.8158567-29-160635409310763/AnsiballZ_setup.py'
Jan 21 13:42:58 compute-0 sudo[70485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:58 compute-0 python3.9[70487]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:42:58 compute-0 sudo[70485]: pam_unix(sudo:session): session closed for user root
Jan 21 13:42:59 compute-0 sudo[70569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kggltkkphmlkodfdalaeywnwsrqqlyhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769002977.8158567-29-160635409310763/AnsiballZ_dnf.py'
Jan 21 13:42:59 compute-0 sudo[70569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:42:59 compute-0 python3.9[70571]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 13:43:00 compute-0 sudo[70569]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:01 compute-0 python3.9[70722]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:43:02 compute-0 python3.9[70873]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
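
These two tasks check whether the node needs a reboot: needs-restarting -r (from the yum-utils package installed just above) exits non-zero when a pending kernel or core-library update calls for one, and the find task looks for flag files that other roles may have dropped. A sketch of the same logic:

    # needs-restarting -r returns 1 when a reboot is recommended.
    if ! needs-restarting -r; then
        echo "reboot required: updated core packages"
    fi
    # Any file under this directory is treated as a reboot request.
    if find /var/lib/openstack/reboot_required/ -type f 2>/dev/null | grep -q .; then
        echo "reboot required: deployment flag present"
    fi
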
Jan 21 13:43:03 compute-0 python3.9[71023]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:43:03 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 13:43:04 compute-0 python3.9[71174]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:43:04 compute-0 sshd-session[70181]: Connection closed by 192.168.122.30 port 45958
Jan 21 13:43:04 compute-0 sshd-session[70178]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:43:04 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 21 13:43:04 compute-0 systemd[1]: session-17.scope: Consumed 6.076s CPU time.
Jan 21 13:43:04 compute-0 systemd-logind[780]: Session 17 logged out. Waiting for processes to exit.
Jan 21 13:43:04 compute-0 systemd-logind[780]: Removed session 17.
Jan 21 13:43:12 compute-0 sshd-session[71199]: Accepted publickey for zuul from 38.102.83.129 port 39144 ssh2: RSA SHA256:554VC9nlbLKS9dRb6a/TnBIuiyV41v4wVIBzdCoA//M
Jan 21 13:43:12 compute-0 systemd-logind[780]: New session 18 of user zuul.
Jan 21 13:43:12 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 21 13:43:12 compute-0 sshd-session[71199]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:43:13 compute-0 sudo[71275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epsanleounyskmkojvgftchucsvdjkvz ; /usr/bin/python3'
Jan 21 13:43:13 compute-0 sudo[71275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:13 compute-0 useradd[71279]: new group: name=ceph-admin, GID=42478
Jan 21 13:43:13 compute-0 useradd[71279]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 21 13:43:14 compute-0 sudo[71275]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:14 compute-0 sudo[71361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvqioevgtqxhamutxhcsdywbikepppcv ; /usr/bin/python3'
Jan 21 13:43:14 compute-0 sudo[71361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:14 compute-0 sudo[71361]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:14 compute-0 sudo[71434]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rocaazkgwnessvetllluokrhylrptuxn ; /usr/bin/python3'
Jan 21 13:43:14 compute-0 sudo[71434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:15 compute-0 sudo[71434]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:15 compute-0 sudo[71484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrfylsfzipsdiajywkmsepabhewaygtc ; /usr/bin/python3'
Jan 21 13:43:15 compute-0 sudo[71484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:15 compute-0 sudo[71484]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:15 compute-0 sudo[71510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdsicijcvpdsgoozzraswgfrkooipvpq ; /usr/bin/python3'
Jan 21 13:43:15 compute-0 sudo[71510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:15 compute-0 sudo[71510]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:16 compute-0 sudo[71536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zibdukcmdriusctzqnrqthosvvospzmm ; /usr/bin/python3'
Jan 21 13:43:16 compute-0 sudo[71536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:16 compute-0 sudo[71536]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:16 compute-0 sudo[71562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yudomgssnouhjxxlynqkrrpkzbthugmp ; /usr/bin/python3'
Jan 21 13:43:16 compute-0 sudo[71562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:16 compute-0 sudo[71562]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:17 compute-0 sudo[71640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrzllnulikldzzorklhylnahklcyjnwq ; /usr/bin/python3'
Jan 21 13:43:17 compute-0 sudo[71640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:17 compute-0 sudo[71640]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:17 compute-0 sudo[71713]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pknbsrzamjznyhsxyffspxuwtrxglvxs ; /usr/bin/python3'
Jan 21 13:43:17 compute-0 sudo[71713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:17 compute-0 sudo[71713]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:17 compute-0 sudo[71815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpngpzinpfdhvmtfbzdhgmoyqrvwbipl ; /usr/bin/python3'
Jan 21 13:43:17 compute-0 sudo[71815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:18 compute-0 sudo[71815]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:18 compute-0 sudo[71888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwczylzhblobkbnuwpgdkcnwzakxivyl ; /usr/bin/python3'
Jan 21 13:43:18 compute-0 sudo[71888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:18 compute-0 sudo[71888]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:18 compute-0 sudo[71938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veqdgssekmntneibfujpwtmogbpslyqb ; /usr/bin/python3'
Jan 21 13:43:18 compute-0 sudo[71938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:19 compute-0 python3[71940]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:43:20 compute-0 sudo[71938]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:20 compute-0 sudo[72033]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idbjglqyhdxbrnlvcadwttpvdswbffhj ; /usr/bin/python3'
Jan 21 13:43:20 compute-0 sudo[72033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:20 compute-0 python3[72035]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 13:43:22 compute-0 sudo[72033]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:22 compute-0 sudo[72060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmhfvevieybilcztoypxjakspdtfsipv ; /usr/bin/python3'
Jan 21 13:43:22 compute-0 sudo[72060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:22 compute-0 python3[72062]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 13:43:22 compute-0 sudo[72060]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:22 compute-0 sudo[72086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrpeveldfhjumhfjjkzixtgjjwnladag ; /usr/bin/python3'
Jan 21 13:43:22 compute-0 sudo[72086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:22 compute-0 python3[72088]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:43:22 compute-0 kernel: loop: module loaded
Jan 21 13:43:22 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Jan 21 13:43:22 compute-0 sudo[72086]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:23 compute-0 sudo[72121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qysgmaikbzwmgdarfqohvzoexhzlktsr ; /usr/bin/python3'
Jan 21 13:43:23 compute-0 sudo[72121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:23 compute-0 python3[72123]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:43:23 compute-0 lvm[72126]: PV /dev/loop3 not used.
Jan 21 13:43:23 compute-0 lvm[72128]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:43:23 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 21 13:43:23 compute-0 lvm[72137]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 21 13:43:23 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 21 13:43:23 compute-0 sudo[72121]: pam_unix(sudo:session): session closed for user root
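
Each Ceph OSD gets a 20 GiB sparse backing file attached to a loop device and carved into a single logical volume; the dd invocation with bs=1 count=0 seek=20G writes no data, it only extends the file to 20 GiB. The logged commands for OSD 0, collected into one sketch:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G   # 20G sparse file
    losetup /dev/loop3 /var/lib/ceph-osd-0.img                        # attach loop device
    lsblk                                                             # verify attachment
    pvcreate /dev/loop3                                               # LVM physical volume
    vgcreate ceph_vg0 /dev/loop3                                      # volume group
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0                        # one LV, all free space
    lvs                                                               # verify the LV

The same sequence repeats below for /dev/loop4 (ceph_vg1) and /dev/loop5 (ceph_vg2).
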
Jan 21 13:43:23 compute-0 sudo[72213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovmubckjkggpkrichkqgrluskazhavpa ; /usr/bin/python3'
Jan 21 13:43:23 compute-0 sudo[72213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:23 compute-0 python3[72215]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:43:23 compute-0 sudo[72213]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:24 compute-0 sudo[72286]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lacdqshbxegfphslxkvgytiafhnltxyl ; /usr/bin/python3'
Jan 21 13:43:24 compute-0 sudo[72286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:24 compute-0 python3[72288]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003003.6257122-36216-12752240275197/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:43:24 compute-0 sudo[72286]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:24 compute-0 sudo[72336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wulpookcjoxlkjqdxnzgkbqbicgmdbez ; /usr/bin/python3'
Jan 21 13:43:24 compute-0 sudo[72336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:25 compute-0 python3[72338]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:43:25 compute-0 systemd[1]: Reloading.
Jan 21 13:43:25 compute-0 systemd-rc-local-generator[72368]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:43:25 compute-0 systemd-sysv-generator[72371]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:43:25 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 21 13:43:25 compute-0 bash[72378]: /dev/loop3: [64513]:4328449 (/var/lib/ceph-osd-0.img)
Jan 21 13:43:25 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 21 13:43:25 compute-0 lvm[72379]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:43:25 compute-0 sudo[72336]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:25 compute-0 lvm[72379]: VG ceph_vg0 finished
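
Loop attachments do not survive a reboot, so each backing file gets a oneshot unit that re-attaches it at boot. The rendered ceph-osd-losetup.service.j2 is not shown in the log; the following is a plausible reconstruction for OSD 0, an assumption based only on the "Ceph OSD losetup" unit description and the losetup mapping that bash prints when the unit starts:

    # Hypothetical unit body; the actual template may differ.
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    [Service]
    Type=oneshot
    # Attach the loop device if it is missing, then print the mapping.
    ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 || /sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'
    RemainAfterExit=true
    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service
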
Jan 21 13:43:25 compute-0 sudo[72403]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmdxgaesxtluvtldtoajwocthmjnjudr ; /usr/bin/python3'
Jan 21 13:43:25 compute-0 sudo[72403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:25 compute-0 python3[72405]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 13:43:27 compute-0 sudo[72403]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:27 compute-0 sudo[72430]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxpjgkpsgzlhkbqkztvbrtpaqrpzzgxb ; /usr/bin/python3'
Jan 21 13:43:27 compute-0 sudo[72430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:27 compute-0 python3[72432]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 13:43:27 compute-0 sudo[72430]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:27 compute-0 sudo[72456]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxdtiyitfjghykimpvhgtkeykvvgxpge ; /usr/bin/python3'
Jan 21 13:43:27 compute-0 sudo[72456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:27 compute-0 python3[72458]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:43:28 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Jan 21 13:43:28 compute-0 sudo[72456]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:28 compute-0 sudo[72488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srlrabskdebjzllmkxqovwdczslqrwfh ; /usr/bin/python3'
Jan 21 13:43:28 compute-0 sudo[72488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:28 compute-0 python3[72490]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:43:28 compute-0 lvm[72493]: PV /dev/loop4 not used.
Jan 21 13:43:28 compute-0 lvm[72503]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:43:28 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 21 13:43:28 compute-0 lvm[72505]:   1 logical volume(s) in volume group "ceph_vg1" now active
Jan 21 13:43:28 compute-0 sudo[72488]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:28 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 21 13:43:29 compute-0 sudo[72581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-magtauccahfvanjbliijnxqmomzvaryo ; /usr/bin/python3'
Jan 21 13:43:29 compute-0 sudo[72581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:29 compute-0 python3[72583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:43:29 compute-0 sudo[72581]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:29 compute-0 sudo[72654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eescrcwhkcearzexajwiaakmgzqcscwk ; /usr/bin/python3'
Jan 21 13:43:29 compute-0 sudo[72654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:29 compute-0 python3[72656]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003008.9603553-36243-33519948277523/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:43:29 compute-0 sudo[72654]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:29 compute-0 sudo[72704]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeqfpbhwqicqjshasulpsmpqixrpfjgu ; /usr/bin/python3'
Jan 21 13:43:29 compute-0 sudo[72704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:30 compute-0 python3[72706]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:43:30 compute-0 systemd[1]: Reloading.
Jan 21 13:43:30 compute-0 systemd-sysv-generator[72739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:43:30 compute-0 systemd-rc-local-generator[72736]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:43:30 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 21 13:43:30 compute-0 bash[72746]: /dev/loop4: [64513]:4579793 (/var/lib/ceph-osd-1.img)
Jan 21 13:43:30 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 21 13:43:30 compute-0 sudo[72704]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:30 compute-0 lvm[72747]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:43:30 compute-0 lvm[72747]: VG ceph_vg1 finished
Jan 21 13:43:30 compute-0 sudo[72771]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwjtshcnghfuynrmkavtfnachxmawpny ; /usr/bin/python3'
Jan 21 13:43:30 compute-0 sudo[72771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:30 compute-0 python3[72773]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 13:43:31 compute-0 chronyd[58418]: Selected source 23.133.168.246 (pool.ntp.org)
Jan 21 13:43:32 compute-0 sudo[72771]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:32 compute-0 sudo[72798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucowjdyxnxiuiwynaipqbzfceowjlsax ; /usr/bin/python3'
Jan 21 13:43:32 compute-0 sudo[72798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:32 compute-0 python3[72800]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 13:43:32 compute-0 sudo[72798]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:32 compute-0 sudo[72824]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyoskjcwgfnyjoxferjccqmqsqwqvuwr ; /usr/bin/python3'
Jan 21 13:43:32 compute-0 sudo[72824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:32 compute-0 python3[72826]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:43:32 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Jan 21 13:43:32 compute-0 sudo[72824]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:32 compute-0 sudo[72856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hehzkbffslpucxlbtthrswdyrtchfowc ; /usr/bin/python3'
Jan 21 13:43:32 compute-0 sudo[72856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:33 compute-0 python3[72858]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:43:33 compute-0 lvm[72861]: PV /dev/loop5 not used.
Jan 21 13:43:33 compute-0 lvm[72863]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:43:33 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 21 13:43:33 compute-0 lvm[72867]:   1 logical volume(s) in volume group "ceph_vg2" now active
Jan 21 13:43:33 compute-0 lvm[72873]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:43:33 compute-0 lvm[72873]: VG ceph_vg2 finished
Jan 21 13:43:33 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 21 13:43:33 compute-0 sudo[72856]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:33 compute-0 sudo[72949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpkkgwhrkoyaqvhxlhpduhvblauftmxe ; /usr/bin/python3'
Jan 21 13:43:33 compute-0 sudo[72949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:33 compute-0 python3[72951]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:43:33 compute-0 sudo[72949]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:34 compute-0 sudo[73022]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkhnncncdnsolstgvwrxcnvrnygboxur ; /usr/bin/python3'
Jan 21 13:43:34 compute-0 sudo[73022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:34 compute-0 python3[73024]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003013.5964992-36270-127594309988560/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:43:34 compute-0 sudo[73022]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:34 compute-0 sudo[73072]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqdtqbxxgtcymusrtziciimnexcgpcgt ; /usr/bin/python3'
Jan 21 13:43:34 compute-0 sudo[73072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:34 compute-0 python3[73074]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:43:34 compute-0 systemd[1]: Reloading.
Jan 21 13:43:34 compute-0 systemd-sysv-generator[73102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:43:34 compute-0 systemd-rc-local-generator[73098]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:43:35 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 21 13:43:35 compute-0 bash[73114]: /dev/loop5: [64513]:4579797 (/var/lib/ceph-osd-2.img)
Jan 21 13:43:35 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 21 13:43:35 compute-0 sudo[73072]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:35 compute-0 lvm[73115]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:43:35 compute-0 lvm[73115]: VG ceph_vg2 finished
Jan 21 13:43:37 compute-0 python3[73139]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:43:39 compute-0 sudo[73230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nieparhgyuqlpocqfhuhzuredoqtfolr ; /usr/bin/python3'
Jan 21 13:43:39 compute-0 sudo[73230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:39 compute-0 python3[73232]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 13:43:41 compute-0 sudo[73230]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:41 compute-0 sudo[73287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hywhzbbmlaauwwxkimuktfpfisnojmii ; /usr/bin/python3'
Jan 21 13:43:41 compute-0 sudo[73287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:41 compute-0 python3[73289]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 21 13:43:44 compute-0 groupadd[73299]: group added to /etc/group: name=cephadm, GID=993
Jan 21 13:43:44 compute-0 groupadd[73299]: group added to /etc/gshadow: name=cephadm
Jan 21 13:43:44 compute-0 groupadd[73299]: new group: name=cephadm, GID=993
Jan 21 13:43:44 compute-0 useradd[73306]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 21 13:43:45 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 13:43:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 13:43:45 compute-0 sudo[73287]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:45 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 13:43:45 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 13:43:45 compute-0 systemd[1]: run-r6103b12748e64ce5a9ecd0bf27261141.service: Deactivated successfully.
Jan 21 13:43:45 compute-0 sudo[73406]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcsiojndwgfkmviwxojbvdjpubdmczbm ; /usr/bin/python3'
Jan 21 13:43:45 compute-0 sudo[73406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:45 compute-0 python3[73409]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 13:43:45 compute-0 sudo[73406]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:46 compute-0 sudo[73435]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glfriscdwvallfslatnwxqzadikcxqtc ; /usr/bin/python3'
Jan 21 13:43:46 compute-0 sudo[73435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:46 compute-0 python3[73437]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:43:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:43:46 compute-0 sudo[73435]: pam_unix(sudo:session): session closed for user root
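
With cephadm installed from the centos-release-ceph-tentacle repository, the play probes for daemons that may already exist before laying down /etc/ceph and the spec files. cephadm ls prints a JSON array of containerized Ceph daemons on the host, and --no-detail keeps the entries short. A sketch (the jq filter is illustrative; jq was installed with the earlier package set):

    # Prints an empty array ([]) on a host with no Ceph daemons yet.
    /usr/sbin/cephadm ls --no-detail
    # Illustrative: count the deployed daemons.
    /usr/sbin/cephadm ls --no-detail | jq 'length'
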
Jan 21 13:43:47 compute-0 sudo[73475]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjwiojyzvqxkwyeximnuabpbajkozans ; /usr/bin/python3'
Jan 21 13:43:47 compute-0 sudo[73475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:47 compute-0 python3[73477]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:43:47 compute-0 sudo[73475]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:47 compute-0 sudo[73501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjrmkgmxurakazcazztgogpyxkjshnzu ; /usr/bin/python3'
Jan 21 13:43:47 compute-0 sudo[73501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:47 compute-0 python3[73503]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:43:47 compute-0 sudo[73501]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:48 compute-0 sudo[73579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgvqvhstbclisapetgvadniwaxqxytuv ; /usr/bin/python3'
Jan 21 13:43:48 compute-0 sudo[73579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:48 compute-0 python3[73581]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:43:48 compute-0 sudo[73579]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:48 compute-0 sudo[73652]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufoimujeayxrzlpacljkfzriweudbxbj ; /usr/bin/python3'
Jan 21 13:43:48 compute-0 sudo[73652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:48 compute-0 python3[73654]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003027.9202132-36418-59116425658073/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:43:48 compute-0 sudo[73652]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:49 compute-0 sudo[73754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnmuqpeubxbiilkwzlmuthoycfmtrshy ; /usr/bin/python3'
Jan 21 13:43:49 compute-0 sudo[73754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:49 compute-0 python3[73756]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:43:49 compute-0 sudo[73754]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:49 compute-0 sudo[73827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grmovorlefvuhpabinnomqxhtietdlmr ; /usr/bin/python3'
Jan 21 13:43:49 compute-0 sudo[73827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:49 compute-0 python3[73829]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003029.1116705-36436-161523195250225/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:43:49 compute-0 sudo[73827]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:50 compute-0 sudo[73877]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnomvwhpyejeaoecjgcrerpygvvtghnj ; /usr/bin/python3'
Jan 21 13:43:50 compute-0 sudo[73877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:50 compute-0 python3[73879]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 13:43:50 compute-0 sudo[73877]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:50 compute-0 sudo[73905]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwhmlnnwalopocrilbjvyuubngjquhvp ; /usr/bin/python3'
Jan 21 13:43:50 compute-0 sudo[73905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:50 compute-0 python3[73907]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 13:43:50 compute-0 sudo[73905]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:50 compute-0 sudo[73933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmkixgfhkasmdjlqdrobqkxhoszoyznz ; /usr/bin/python3'
Jan 21 13:43:50 compute-0 sudo[73933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:50 compute-0 python3[73935]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 13:43:50 compute-0 sudo[73933]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:51 compute-0 python3[73961]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 13:43:51 compute-0 sudo[73985]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waatdjgaaogilsdffgijmivxhxwlipqe ; /usr/bin/python3'
Jan 21 13:43:51 compute-0 sudo[73985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:43:51 compute-0 python3[73987]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config /home/ceph-admin/assimilate_ceph.conf --single-host-defaults --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
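
Re-wrapped for readability, the bootstrap invocation is (paths, flags, and FSID exactly as logged):

    sudo /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --single-host-defaults \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100

Everything that follows, through the mon start at 13:44:18, is this one command doing its work.
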
Jan 21 13:43:51 compute-0 sshd-session[73991]: Accepted publickey for ceph-admin from 192.168.122.100 port 46368 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:43:51 compute-0 systemd-logind[780]: New session 19 of user ceph-admin.
Jan 21 13:43:51 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 21 13:43:51 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 21 13:43:52 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 21 13:43:52 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 21 13:43:52 compute-0 systemd[73995]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:43:52 compute-0 systemd[73995]: Queued start job for default target Main User Target.
Jan 21 13:43:52 compute-0 systemd[73995]: Created slice User Application Slice.
Jan 21 13:43:52 compute-0 systemd[73995]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 13:43:52 compute-0 systemd[73995]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 13:43:52 compute-0 systemd[73995]: Reached target Paths.
Jan 21 13:43:52 compute-0 systemd[73995]: Reached target Timers.
Jan 21 13:43:52 compute-0 systemd[73995]: Starting D-Bus User Message Bus Socket...
Jan 21 13:43:52 compute-0 systemd[73995]: Starting Create User's Volatile Files and Directories...
Jan 21 13:43:52 compute-0 systemd[73995]: Listening on D-Bus User Message Bus Socket.
Jan 21 13:43:52 compute-0 systemd[73995]: Reached target Sockets.
Jan 21 13:43:52 compute-0 systemd[73995]: Finished Create User's Volatile Files and Directories.
Jan 21 13:43:52 compute-0 systemd[73995]: Reached target Basic System.
Jan 21 13:43:52 compute-0 systemd[73995]: Reached target Main User Target.
Jan 21 13:43:52 compute-0 systemd[73995]: Startup finished in 771ms.
Jan 21 13:43:52 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 21 13:43:52 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 21 13:43:52 compute-0 sshd-session[73991]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:43:52 compute-0 sudo[74012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 21 13:43:52 compute-0 sudo[74012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:43:52 compute-0 sudo[74012]: pam_unix(sudo:session): session closed for user root
Jan 21 13:43:52 compute-0 sshd-session[74011]: Received disconnect from 192.168.122.100 port 46368:11: disconnected by user
Jan 21 13:43:52 compute-0 sshd-session[74011]: Disconnected from user ceph-admin 192.168.122.100 port 46368
Jan 21 13:43:52 compute-0 sshd-session[73991]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 21 13:43:52 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 21 13:43:52 compute-0 systemd-logind[780]: Session 19 logged out. Waiting for processes to exit.
Jan 21 13:43:52 compute-0 systemd-logind[780]: Removed session 19.
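
The short SSH session above (publickey login as ceph-admin, a single sudo /bin/echo, immediate disconnect) is bootstrap verifying that the --ssh-user can reach the mon IP and escalate to root. The same check can be reproduced by hand along these lines:

    sudo ssh -i /home/ceph-admin/.ssh/id_rsa ceph-admin@192.168.122.100 sudo true

(sudo true is a stand-in here; the log shows bootstrap itself used sudo /bin/echo.)
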
Jan 21 13:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat957476698-lower\x2dmapped.mount: Deactivated successfully.
Jan 21 13:44:02 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 21 13:44:03 compute-0 systemd[73995]: Activating special unit Exit the Session...
Jan 21 13:44:03 compute-0 systemd[73995]: Stopped target Main User Target.
Jan 21 13:44:03 compute-0 systemd[73995]: Stopped target Basic System.
Jan 21 13:44:03 compute-0 systemd[73995]: Stopped target Paths.
Jan 21 13:44:03 compute-0 systemd[73995]: Stopped target Sockets.
Jan 21 13:44:03 compute-0 systemd[73995]: Stopped target Timers.
Jan 21 13:44:03 compute-0 systemd[73995]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 21 13:44:03 compute-0 systemd[73995]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 21 13:44:03 compute-0 systemd[73995]: Closed D-Bus User Message Bus Socket.
Jan 21 13:44:03 compute-0 systemd[73995]: Stopped Create User's Volatile Files and Directories.
Jan 21 13:44:03 compute-0 systemd[73995]: Removed slice User Application Slice.
Jan 21 13:44:03 compute-0 systemd[73995]: Reached target Shutdown.
Jan 21 13:44:03 compute-0 systemd[73995]: Finished Exit the Session.
Jan 21 13:44:03 compute-0 systemd[73995]: Reached target Exit the Session.
Jan 21 13:44:03 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 21 13:44:03 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 21 13:44:03 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 21 13:44:03 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 21 13:44:03 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 21 13:44:03 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 21 13:44:03 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 21 13:44:11 compute-0 podman[74090]: 2026-01-21 13:44:11.71765793 +0000 UTC m=+18.513283389 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:11 compute-0 podman[74155]: 2026-01-21 13:44:11.823357747 +0000 UTC m=+0.068269173 container create 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:44:11 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 21 13:44:11 compute-0 systemd[1]: Started libpod-conmon-7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687.scope.
Jan 21 13:44:11 compute-0 podman[74155]: 2026-01-21 13:44:11.793841 +0000 UTC m=+0.038752406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:11 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:11 compute-0 podman[74155]: 2026-01-21 13:44:11.976087787 +0000 UTC m=+0.220999283 container init 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:11 compute-0 podman[74155]: 2026-01-21 13:44:11.988650147 +0000 UTC m=+0.233561523 container start 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 13:44:11 compute-0 podman[74155]: 2026-01-21 13:44:11.994270472 +0000 UTC m=+0.239181878 container attach 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 13:44:12 compute-0 funny_torvalds[74171]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 21 13:44:12 compute-0 podman[74155]: 2026-01-21 13:44:12.10506107 +0000 UTC m=+0.349972516 container died 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:44:12 compute-0 systemd[1]: libpod-7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687.scope: Deactivated successfully.
Jan 21 13:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-34996be08746d30eaf0a2bdb9ba5253a5c5faff7e7f74469be4c4ac0e7708a17-merged.mount: Deactivated successfully.
Jan 21 13:44:12 compute-0 podman[74155]: 2026-01-21 13:44:12.163381413 +0000 UTC m=+0.408292779 container remove 7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687 (image=quay.io/ceph/ceph:v20, name=funny_torvalds, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:12 compute-0 systemd[1]: libpod-conmon-7038b556d3ac7759e057d3e717e0e59f7d7c1893d8f7457059a33eeee1f6b687.scope: Deactivated successfully.
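
funny_torvalds is a throwaway container cephadm uses to confirm the freshly pulled image actually runs; its only output is the version banner. Assuming the image's ceph binary is on PATH, the equivalent manual check should print the same line:

    sudo podman run --rm quay.io/ceph/ceph:v20 ceph --version
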
Jan 21 13:44:12 compute-0 podman[74189]: 2026-01-21 13:44:12.243105918 +0000 UTC m=+0.056306936 container create 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:12 compute-0 systemd[1]: Started libpod-conmon-1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302.scope.
Jan 21 13:44:12 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:12 compute-0 podman[74189]: 2026-01-21 13:44:12.213712036 +0000 UTC m=+0.026913134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:12 compute-0 podman[74189]: 2026-01-21 13:44:12.321263477 +0000 UTC m=+0.134464565 container init 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:12 compute-0 podman[74189]: 2026-01-21 13:44:12.333206163 +0000 UTC m=+0.146407191 container start 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:12 compute-0 interesting_lovelace[74205]: 167 167
Jan 21 13:44:12 compute-0 systemd[1]: libpod-1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302.scope: Deactivated successfully.
Jan 21 13:44:12 compute-0 podman[74189]: 2026-01-21 13:44:12.339899602 +0000 UTC m=+0.153100720 container attach 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 13:44:12 compute-0 podman[74189]: 2026-01-21 13:44:12.340486756 +0000 UTC m=+0.153687814 container died 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 13:44:12 compute-0 podman[74189]: 2026-01-21 13:44:12.393852242 +0000 UTC m=+0.207053260 container remove 1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302 (image=quay.io/ceph/ceph:v20, name=interesting_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:12 compute-0 systemd[1]: libpod-conmon-1df934cef3350fb6881e30f14db5a33635954033f0389d7a29c91c4f01e43302.scope: Deactivated successfully.
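
The "167 167" printed by interesting_lovelace is the ceph user's UID and GID inside the image; cephadm probes these so it can chown host-side data directories to match. A rough equivalent, assuming stat on /var/lib/ceph is the probe used:

    sudo podman run --rm quay.io/ceph/ceph:v20 stat -c '%u %g' /var/lib/ceph
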
Jan 21 13:44:12 compute-0 podman[74224]: 2026-01-21 13:44:12.468657609 +0000 UTC m=+0.050762034 container create 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:12 compute-0 systemd[1]: Started libpod-conmon-849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e.scope.
Jan 21 13:44:12 compute-0 podman[74224]: 2026-01-21 13:44:12.443890118 +0000 UTC m=+0.025994533 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:12 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:12 compute-0 podman[74224]: 2026-01-21 13:44:12.562813591 +0000 UTC m=+0.144918016 container init 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 13:44:12 compute-0 podman[74224]: 2026-01-21 13:44:12.570077254 +0000 UTC m=+0.152181639 container start 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 13:44:12 compute-0 podman[74224]: 2026-01-21 13:44:12.573947457 +0000 UTC m=+0.156051882 container attach 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 13:44:12 compute-0 elegant_bhaskara[74241]: AQAs2HBpCipMJBAAyTFuLCl0vCqAzsT2sgexTg==
Jan 21 13:44:12 compute-0 systemd[1]: libpod-849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e.scope: Deactivated successfully.
Jan 21 13:44:12 compute-0 podman[74224]: 2026-01-21 13:44:12.615743285 +0000 UTC m=+0.197847730 container died 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 13:44:12 compute-0 podman[74224]: 2026-01-21 13:44:12.663204459 +0000 UTC m=+0.245308844 container remove 849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e (image=quay.io/ceph/ceph:v20, name=elegant_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:12 compute-0 systemd[1]: libpod-conmon-849cfcccc78378b4b5b0e8ecde0115c8cc79c96d03e32d03a6d64b3b26473a4e.scope: Deactivated successfully.
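
elegant_bhaskara and the two containers after it each emit a single base64 secret: freshly generated cephx keys (mon key, client.admin key, and so on), produced with ceph-authtool inside the image, along the lines of:

    sudo podman run --rm quay.io/ceph/ceph:v20 ceph-authtool --gen-print-key

Each run prints a new random key, which bootstrap captures from stdout and assembles into the initial keyrings.
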
Jan 21 13:44:12 compute-0 podman[74260]: 2026-01-21 13:44:12.744826101 +0000 UTC m=+0.051352569 container create 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 13:44:12 compute-0 systemd[1]: Started libpod-conmon-602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39.scope.
Jan 21 13:44:12 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:12 compute-0 podman[74260]: 2026-01-21 13:44:12.803970474 +0000 UTC m=+0.110496962 container init 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:44:12 compute-0 podman[74260]: 2026-01-21 13:44:12.811166896 +0000 UTC m=+0.117693354 container start 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 13:44:12 compute-0 podman[74260]: 2026-01-21 13:44:12.815487129 +0000 UTC m=+0.122013587 container attach 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 21 13:44:12 compute-0 podman[74260]: 2026-01-21 13:44:12.725180971 +0000 UTC m=+0.031707429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:12 compute-0 hopeful_matsumoto[74276]: AQAs2HBpQOhsMRAAPu2bYRFMkONDImVzcP6vng==
Jan 21 13:44:12 compute-0 systemd[1]: libpod-602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39.scope: Deactivated successfully.
Jan 21 13:44:12 compute-0 podman[74260]: 2026-01-21 13:44:12.832950487 +0000 UTC m=+0.139476955 container died 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a99926ebcedfe6c32fc300cd18298f898f559e0e8446d1e9840653e757e743cd-merged.mount: Deactivated successfully.
Jan 21 13:44:12 compute-0 podman[74260]: 2026-01-21 13:44:12.873000674 +0000 UTC m=+0.179527172 container remove 602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39 (image=quay.io/ceph/ceph:v20, name=hopeful_matsumoto, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 13:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:12 compute-0 systemd[1]: libpod-conmon-602a6cd5170e67996b88662325bcbf6ddbb0f0f22e33565ed00c0e64e622eb39.scope: Deactivated successfully.
Jan 21 13:44:12 compute-0 podman[74293]: 2026-01-21 13:44:12.932599369 +0000 UTC m=+0.041584505 container create 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 13:44:12 compute-0 systemd[1]: Started libpod-conmon-1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2.scope.
Jan 21 13:44:12 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:13 compute-0 podman[74293]: 2026-01-21 13:44:12.911417933 +0000 UTC m=+0.020403039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:13 compute-0 podman[74293]: 2026-01-21 13:44:13.290500322 +0000 UTC m=+0.399485428 container init 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 13:44:13 compute-0 podman[74293]: 2026-01-21 13:44:13.297944531 +0000 UTC m=+0.406929647 container start 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 13:44:13 compute-0 zen_hypatia[74309]: AQAt2HBpuSE7ExAAoducgVoxob61DucUKgYZ3A==
Jan 21 13:44:13 compute-0 systemd[1]: libpod-1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2.scope: Deactivated successfully.
Jan 21 13:44:13 compute-0 podman[74293]: 2026-01-21 13:44:13.681823816 +0000 UTC m=+0.790808952 container attach 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:44:13 compute-0 podman[74293]: 2026-01-21 13:44:13.682673666 +0000 UTC m=+0.791658812 container died 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-980ba18bb5e689a5268a8fabe1a42bf36aedd8bea917199228d16a615af09c7a-merged.mount: Deactivated successfully.
Jan 21 13:44:16 compute-0 podman[74293]: 2026-01-21 13:44:16.485217241 +0000 UTC m=+3.594202337 container remove 1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2 (image=quay.io/ceph/ceph:v20, name=zen_hypatia, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:16 compute-0 systemd[1]: libpod-conmon-1de128dc89c40f313850b5dc948b565c2495effff4f24751a0c4e33e24faebe2.scope: Deactivated successfully.
Jan 21 13:44:16 compute-0 podman[74328]: 2026-01-21 13:44:16.550897881 +0000 UTC m=+0.043984252 container create 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:16 compute-0 systemd[1]: Started libpod-conmon-6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f.scope.
Jan 21 13:44:16 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d123dd3da2c8bfd14870549295b527d8b4af8a125814b67da258787a7d2d6a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:16 compute-0 podman[74328]: 2026-01-21 13:44:16.624742326 +0000 UTC m=+0.117828677 container init 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:16 compute-0 podman[74328]: 2026-01-21 13:44:16.529382957 +0000 UTC m=+0.022469308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:16 compute-0 podman[74328]: 2026-01-21 13:44:16.629475969 +0000 UTC m=+0.122562340 container start 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 21 13:44:16 compute-0 podman[74328]: 2026-01-21 13:44:16.634984911 +0000 UTC m=+0.128071292 container attach 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 13:44:16 compute-0 cool_shaw[74344]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 21 13:44:16 compute-0 cool_shaw[74344]: setting min_mon_release = tentacle
Jan 21 13:44:16 compute-0 cool_shaw[74344]: /usr/bin/monmaptool: set fsid to 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:16 compute-0 cool_shaw[74344]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 21 13:44:16 compute-0 systemd[1]: libpod-6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f.scope: Deactivated successfully.
Jan 21 13:44:16 compute-0 podman[74328]: 2026-01-21 13:44:16.669781053 +0000 UTC m=+0.162867444 container died 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:16 compute-0 podman[74328]: 2026-01-21 13:44:16.709956713 +0000 UTC m=+0.203043054 container remove 6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f (image=quay.io/ceph/ceph:v20, name=cool_shaw, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:16 compute-0 systemd[1]: libpod-conmon-6941593b93dbde1bec0a2f7ff2e255cf91c0f80517d1337137c168e519f1ac1f.scope: Deactivated successfully.
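
cool_shaw runs monmaptool to write the initial monitor map: epoch 0, the bootstrap FSID, one monitor, min_mon_release pinned to tentacle. Reconstructed from the output above, the invocation is approximately:

    monmaptool --create --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a \
        --addv compute-0 '[v2:192.168.122.100:3300,v1:192.168.122.100:6789]' /tmp/monmap

(The --addv form and ports are an assumption; only the fsid, epoch, and monitor count appear in the log.)
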
Jan 21 13:44:16 compute-0 podman[74362]: 2026-01-21 13:44:16.775207242 +0000 UTC m=+0.047073505 container create 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 13:44:16 compute-0 systemd[1]: Started libpod-conmon-7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6.scope.
Jan 21 13:44:16 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236b6e161ec668905c34368baa586458c05abc49385adaf485a741f17d4b2b2/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236b6e161ec668905c34368baa586458c05abc49385adaf485a741f17d4b2b2/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236b6e161ec668905c34368baa586458c05abc49385adaf485a741f17d4b2b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:16 compute-0 podman[74362]: 2026-01-21 13:44:16.74960157 +0000 UTC m=+0.021467923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236b6e161ec668905c34368baa586458c05abc49385adaf485a741f17d4b2b2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:16 compute-0 podman[74362]: 2026-01-21 13:44:16.861151377 +0000 UTC m=+0.133017690 container init 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:16 compute-0 podman[74362]: 2026-01-21 13:44:16.875254694 +0000 UTC m=+0.147120957 container start 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 13:44:16 compute-0 podman[74362]: 2026-01-21 13:44:16.879213978 +0000 UTC m=+0.151080281 container attach 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 13:44:16 compute-0 systemd[1]: libpod-7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6.scope: Deactivated successfully.
Jan 21 13:44:16 compute-0 podman[74362]: 2026-01-21 13:44:16.979992777 +0000 UTC m=+0.251859050 container died 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 13:44:17 compute-0 podman[74362]: 2026-01-21 13:44:17.012477384 +0000 UTC m=+0.284343647 container remove 7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6 (image=quay.io/ceph/ceph:v20, name=admiring_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:44:17 compute-0 systemd[1]: libpod-conmon-7bca285298e376516a1f5f7e103969b364e4a1bdff0118d27bfff9ee2f70f2c6.scope: Deactivated successfully.
Jan 21 13:44:17 compute-0 systemd[1]: Reloading.
Jan 21 13:44:17 compute-0 systemd-rc-local-generator[74445]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:44:17 compute-0 systemd-sysv-generator[74450]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:44:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:17 compute-0 systemd[1]: Reloading.
Jan 21 13:44:17 compute-0 systemd-rc-local-generator[74478]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:44:17 compute-0 systemd-sysv-generator[74481]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:44:17 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 21 13:44:17 compute-0 systemd[1]: Reloading.
Jan 21 13:44:17 compute-0 systemd-rc-local-generator[74523]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:44:17 compute-0 systemd-sysv-generator[74526]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:44:17 compute-0 systemd[1]: Reached target Ceph cluster 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:44:17 compute-0 systemd[1]: Reloading.
Jan 21 13:44:17 compute-0 systemd-rc-local-generator[74557]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:44:17 compute-0 systemd-sysv-generator[74562]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:44:18 compute-0 systemd[1]: Reloading.
Jan 21 13:44:18 compute-0 systemd-sysv-generator[74599]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:44:18 compute-0 systemd-rc-local-generator[74596]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:44:18 compute-0 systemd[1]: Created slice Slice /system/ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:44:18 compute-0 systemd[1]: Reached target System Time Set.
Jan 21 13:44:18 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 21 13:44:18 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:18 compute-0 podman[74656]: 2026-01-21 13:44:18.679920828 +0000 UTC m=+0.050850507 container create c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 13:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4390f1477dc327414503e549cbc2da0621e1b20c057c3c48dfe97b5f70783145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4390f1477dc327414503e549cbc2da0621e1b20c057c3c48dfe97b5f70783145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4390f1477dc327414503e549cbc2da0621e1b20c057c3c48dfe97b5f70783145/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4390f1477dc327414503e549cbc2da0621e1b20c057c3c48dfe97b5f70783145/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:18 compute-0 podman[74656]: 2026-01-21 13:44:18.750801942 +0000 UTC m=+0.121731661 container init c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 13:44:18 compute-0 podman[74656]: 2026-01-21 13:44:18.656319814 +0000 UTC m=+0.027249563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:18 compute-0 podman[74656]: 2026-01-21 13:44:18.765196156 +0000 UTC m=+0.136125845 container start c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:18 compute-0 bash[74656]: c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6
Jan 21 13:44:18 compute-0 systemd[1]: Started Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:44:18 compute-0 ceph-mon[74675]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 13:44:18 compute-0 ceph-mon[74675]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 21 13:44:18 compute-0 ceph-mon[74675]: pidfile_write: ignore empty --pid-file
Jan 21 13:44:18 compute-0 ceph-mon[74675]: load: jerasure load: lrc 
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: RocksDB version: 7.9.2
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Git sha 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: DB SUMMARY
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: DB Session ID:  4UCG4RZ2N4ZX2X46OZSC
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: CURRENT file:  CURRENT
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                         Options.error_if_exists: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                       Options.create_if_missing: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                                     Options.env: 0x556ad76b5440
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                                Options.info_log: 0x556ad9d173e0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                              Options.statistics: (nil)
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                               Options.use_fsync: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                              Options.db_log_dir: 
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                                 Options.wal_dir: 
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                    Options.write_buffer_manager: 0x556ad9c96140
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                  Options.unordered_write: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                               Options.row_cache: None
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                              Options.wal_filter: None
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.two_write_queues: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.wal_compression: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.atomic_flush: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.max_background_jobs: 2
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.max_background_compactions: -1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.max_subcompactions: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.max_total_wal_size: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                          Options.max_open_files: -1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:       Options.compaction_readahead_size: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Compression algorithms supported:
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         kZSTD supported: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         kXpressCompression supported: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         kBZip2Compression supported: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         kLZ4Compression supported: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         kZlibCompression supported: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         kSnappyCompression supported: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:           Options.merge_operator: 
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:        Options.compaction_filter: None
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556ad9ca2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556ad9c878d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:        Options.write_buffer_size: 33554432
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:  Options.max_write_buffer_number: 2
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:          Options.compression: NoCompression
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.num_levels: 7
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0890460c-1efa-4b98-b37d-c7b2c3489544
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003058818385, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003058820524, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "4UCG4RZ2N4ZX2X46OZSC", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003058820648, "job": 1, "event": "recovery_finished"}
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556ad9cb4e00
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: DB pointer 0x556ad9e00000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 13:44:18 compute-0 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556ad9c878d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 21 13:44:18 compute-0 ceph-mon[74675]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@-1(???) e0 preinit fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 21 13:44:18 compute-0 ceph-mon[74675]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 21 13:44:18 compute-0 ceph-mon[74675]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 21 13:44:18 compute-0 podman[74676]: 2026-01-21 13:44:18.856192311 +0000 UTC m=+0.051396269 container create cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [DBG] : fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [DBG] : last_changed 2026-01-21T13:44:16.665097+0000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [DBG] : created 2026-01-21T13:44:16.665097+0000
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-21T13:44:16.926757Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).mds e1 new map
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-01-21T13:44:18.859596+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [DBG] : fsmap 
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mkfs 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 21 13:44:18 compute-0 ceph-mon[74675]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 21 13:44:18 compute-0 ceph-mon[74675]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 13:44:18 compute-0 systemd[1]: Started libpod-conmon-cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99.scope.
Jan 21 13:44:18 compute-0 podman[74676]: 2026-01-21 13:44:18.831704056 +0000 UTC m=+0.026908034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:18 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e951e640ba0057ec35c48a4230c07ff41a1bb6e7af08d448dcd8b309f7f21b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e951e640ba0057ec35c48a4230c07ff41a1bb6e7af08d448dcd8b309f7f21b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e951e640ba0057ec35c48a4230c07ff41a1bb6e7af08d448dcd8b309f7f21b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:18 compute-0 podman[74676]: 2026-01-21 13:44:18.95488343 +0000 UTC m=+0.150087378 container init cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 13:44:18 compute-0 podman[74676]: 2026-01-21 13:44:18.967647715 +0000 UTC m=+0.162851703 container start cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 13:44:18 compute-0 podman[74676]: 2026-01-21 13:44:18.972461451 +0000 UTC m=+0.167665429 container attach cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:44:19 compute-0 ceph-mon[74675]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 21 13:44:19 compute-0 ceph-mon[74675]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432883261' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:   cluster:
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:     id:     2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:     health: HEALTH_OK
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:  
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:   services:
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:     mon: 1 daemons, quorum compute-0 (age 0.320852s) [leader: compute-0]
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:     mgr: no daemons active
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:     osd: 0 osds: 0 up, 0 in
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:  
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:   data:
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:     pools:   0 pools, 0 pgs
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:     objects: 0 objects, 0 B
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:     usage:   0 B used, 0 B / 0 B avail
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:     pgs:     
Jan 21 13:44:19 compute-0 admiring_blackwell[74728]:  
Jan 21 13:44:19 compute-0 systemd[1]: libpod-cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99.scope: Deactivated successfully.
Jan 21 13:44:19 compute-0 conmon[74728]: conmon cd18b6332c4a5c2a6982 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99.scope/container/memory.events
Jan 21 13:44:19 compute-0 podman[74676]: 2026-01-21 13:44:19.195491121 +0000 UTC m=+0.390695069 container died cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 13:44:19 compute-0 podman[74676]: 2026-01-21 13:44:19.241742077 +0000 UTC m=+0.436946065 container remove cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99 (image=quay.io/ceph/ceph:v20, name=admiring_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:19 compute-0 systemd[1]: libpod-conmon-cd18b6332c4a5c2a6982250fd0f31df6384b742329de35d05f9b7b38c8a4eb99.scope: Deactivated successfully.
Jan 21 13:44:19 compute-0 podman[74770]: 2026-01-21 13:44:19.315824357 +0000 UTC m=+0.048311976 container create b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 13:44:19 compute-0 systemd[1]: Started libpod-conmon-b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a.scope.
Jan 21 13:44:19 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a4b3622f9d4bba17cbcf2b06ac58fe662b9748b78a3cddc33c59c7caf77e90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a4b3622f9d4bba17cbcf2b06ac58fe662b9748b78a3cddc33c59c7caf77e90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a4b3622f9d4bba17cbcf2b06ac58fe662b9748b78a3cddc33c59c7caf77e90/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a4b3622f9d4bba17cbcf2b06ac58fe662b9748b78a3cddc33c59c7caf77e90/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:19 compute-0 podman[74770]: 2026-01-21 13:44:19.383981236 +0000 UTC m=+0.116468875 container init b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 13:44:19 compute-0 podman[74770]: 2026-01-21 13:44:19.294337593 +0000 UTC m=+0.026825252 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:19 compute-0 podman[74770]: 2026-01-21 13:44:19.393302869 +0000 UTC m=+0.125790488 container start b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:44:19 compute-0 podman[74770]: 2026-01-21 13:44:19.39708476 +0000 UTC m=+0.129572379 container attach b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:19 compute-0 ceph-mon[74675]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 21 13:44:19 compute-0 ceph-mon[74675]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1859974711' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 13:44:19 compute-0 ceph-mon[74675]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1859974711' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 13:44:19 compute-0 friendly_maxwell[74787]: 
Jan 21 13:44:19 compute-0 friendly_maxwell[74787]: [global]
Jan 21 13:44:19 compute-0 friendly_maxwell[74787]:         fsid = 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:19 compute-0 friendly_maxwell[74787]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 21 13:44:19 compute-0 friendly_maxwell[74787]:         osd_crush_chooseleaf_type = 0
Jan 21 13:44:19 compute-0 systemd[1]: libpod-b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a.scope: Deactivated successfully.
Jan 21 13:44:19 compute-0 podman[74770]: 2026-01-21 13:44:19.610785977 +0000 UTC m=+0.343273636 container died b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:44:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-95a4b3622f9d4bba17cbcf2b06ac58fe662b9748b78a3cddc33c59c7caf77e90-merged.mount: Deactivated successfully.
Jan 21 13:44:19 compute-0 podman[74770]: 2026-01-21 13:44:19.656972691 +0000 UTC m=+0.389460310 container remove b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a (image=quay.io/ceph/ceph:v20, name=friendly_maxwell, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 13:44:19 compute-0 systemd[1]: libpod-conmon-b8ff68c0265683de00484a0c562e60880df87fd9b6caee3b2cdc9282c521575a.scope: Deactivated successfully.
Jan 21 13:44:19 compute-0 podman[74825]: 2026-01-21 13:44:19.734165457 +0000 UTC m=+0.048275486 container create fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:19 compute-0 systemd[1]: Started libpod-conmon-fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd.scope.
Jan 21 13:44:19 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11daf38d5772565f13eeb0d21f13f0860492994920c084c7d1bb14995cbd6b2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11daf38d5772565f13eeb0d21f13f0860492994920c084c7d1bb14995cbd6b2e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11daf38d5772565f13eeb0d21f13f0860492994920c084c7d1bb14995cbd6b2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11daf38d5772565f13eeb0d21f13f0860492994920c084c7d1bb14995cbd6b2e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:19 compute-0 podman[74825]: 2026-01-21 13:44:19.718464101 +0000 UTC m=+0.032574160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:19 compute-0 podman[74825]: 2026-01-21 13:44:19.815997812 +0000 UTC m=+0.130107951 container init fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:19 compute-0 podman[74825]: 2026-01-21 13:44:19.823195884 +0000 UTC m=+0.137305943 container start fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:19 compute-0 podman[74825]: 2026-01-21 13:44:19.827601819 +0000 UTC m=+0.141711948 container attach fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:19 compute-0 ceph-mon[74675]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 13:44:19 compute-0 ceph-mon[74675]: monmap epoch 1
Jan 21 13:44:19 compute-0 ceph-mon[74675]: fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:19 compute-0 ceph-mon[74675]: last_changed 2026-01-21T13:44:16.665097+0000
Jan 21 13:44:19 compute-0 ceph-mon[74675]: created 2026-01-21T13:44:16.665097+0000
Jan 21 13:44:19 compute-0 ceph-mon[74675]: min_mon_release 20 (tentacle)
Jan 21 13:44:19 compute-0 ceph-mon[74675]: election_strategy: 1
Jan 21 13:44:19 compute-0 ceph-mon[74675]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 13:44:19 compute-0 ceph-mon[74675]: fsmap 
Jan 21 13:44:19 compute-0 ceph-mon[74675]: osdmap e1: 0 total, 0 up, 0 in
Jan 21 13:44:19 compute-0 ceph-mon[74675]: mgrmap e1: no daemons active
Jan 21 13:44:19 compute-0 ceph-mon[74675]: from='client.? 192.168.122.100:0/1432883261' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 21 13:44:19 compute-0 ceph-mon[74675]: from='client.? 192.168.122.100:0/1859974711' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 13:44:19 compute-0 ceph-mon[74675]: from='client.? 192.168.122.100:0/1859974711' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 13:44:20 compute-0 ceph-mon[74675]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:44:20 compute-0 ceph-mon[74675]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3589707557' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:44:20 compute-0 systemd[1]: libpod-fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd.scope: Deactivated successfully.
Jan 21 13:44:20 compute-0 podman[74825]: 2026-01-21 13:44:20.094851497 +0000 UTC m=+0.408961596 container died fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 13:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-11daf38d5772565f13eeb0d21f13f0860492994920c084c7d1bb14995cbd6b2e-merged.mount: Deactivated successfully.
Jan 21 13:44:20 compute-0 podman[74825]: 2026-01-21 13:44:20.145264372 +0000 UTC m=+0.459374411 container remove fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd (image=quay.io/ceph/ceph:v20, name=sad_antonelli, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 13:44:20 compute-0 systemd[1]: libpod-conmon-fb0fd3fe4bda351e9da491f41a7d6c5d77be7908904047dc9ead671e488b4dbd.scope: Deactivated successfully.
Jan 21 13:44:20 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:44:20 compute-0 ceph-mon[74675]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 21 13:44:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0[74671]: 2026-01-21T13:44:20.408+0000 7fd1922c0640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 21 13:44:20 compute-0 ceph-mon[74675]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 21 13:44:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0[74671]: 2026-01-21T13:44:20.408+0000 7fd1922c0640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 21 13:44:20 compute-0 ceph-mon[74675]: mon.compute-0@0(leader) e1 shutdown
Jan 21 13:44:20 compute-0 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 13:44:20 compute-0 ceph-mon[74675]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 13:44:20 compute-0 podman[74911]: 2026-01-21 13:44:20.470349062 +0000 UTC m=+0.118694917 container died c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 13:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4390f1477dc327414503e549cbc2da0621e1b20c057c3c48dfe97b5f70783145-merged.mount: Deactivated successfully.
Jan 21 13:44:20 compute-0 podman[74911]: 2026-01-21 13:44:20.521339101 +0000 UTC m=+0.169684956 container remove c9ae1a27d7b44ae254b27e11a7d77da22af4050bf29acd9aca6a34cdd39f26a6 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:20 compute-0 bash[74911]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0
Jan 21 13:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 21 13:44:20 compute-0 systemd[1]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mon.compute-0.service: Deactivated successfully.
Jan 21 13:44:20 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:44:20 compute-0 systemd[1]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mon.compute-0.service: Consumed 1.066s CPU time.
Jan 21 13:44:20 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:44:20 compute-0 podman[75012]: 2026-01-21 13:44:20.906480287 +0000 UTC m=+0.035329936 container create cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529ae71ef349095f305fe3a4b591c8edb0eda6d58e36470657c076e838a2af68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529ae71ef349095f305fe3a4b591c8edb0eda6d58e36470657c076e838a2af68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529ae71ef349095f305fe3a4b591c8edb0eda6d58e36470657c076e838a2af68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529ae71ef349095f305fe3a4b591c8edb0eda6d58e36470657c076e838a2af68/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:20 compute-0 podman[75012]: 2026-01-21 13:44:20.970577169 +0000 UTC m=+0.099426838 container init cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 13:44:20 compute-0 podman[75012]: 2026-01-21 13:44:20.98110775 +0000 UTC m=+0.109957399 container start cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:20 compute-0 bash[75012]: cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649
Jan 21 13:44:20 compute-0 podman[75012]: 2026-01-21 13:44:20.890496484 +0000 UTC m=+0.019346153 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:20 compute-0 systemd[1]: Started Ceph mon.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:44:21 compute-0 ceph-mon[75031]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 13:44:21 compute-0 ceph-mon[75031]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 21 13:44:21 compute-0 ceph-mon[75031]: pidfile_write: ignore empty --pid-file
Jan 21 13:44:21 compute-0 ceph-mon[75031]: load: jerasure load: lrc 
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: RocksDB version: 7.9.2
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Git sha 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: DB SUMMARY
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: DB Session ID:  MNCZ0UYV5GPEBH7LDUF1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: CURRENT file:  CURRENT
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                         Options.error_if_exists: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                       Options.create_if_missing: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                                     Options.env: 0x56223e97f440
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                                Options.info_log: 0x562240bb9e80
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                              Options.statistics: (nil)
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                               Options.use_fsync: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                              Options.db_log_dir: 
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                                 Options.wal_dir: 
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                    Options.write_buffer_manager: 0x562240c04140
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                  Options.unordered_write: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                               Options.row_cache: None
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                              Options.wal_filter: None
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.two_write_queues: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.wal_compression: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.atomic_flush: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.max_background_jobs: 2
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.max_background_compactions: -1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.max_subcompactions: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.max_total_wal_size: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                          Options.max_open_files: -1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:       Options.compaction_readahead_size: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Compression algorithms supported:
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         kZSTD supported: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         kXpressCompression supported: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         kBZip2Compression supported: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         kLZ4Compression supported: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         kZlibCompression supported: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         kSnappyCompression supported: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:           Options.merge_operator: 
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:        Options.compaction_filter: None
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562240c10a00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562240bf58d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:        Options.write_buffer_size: 33554432
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:  Options.max_write_buffer_number: 2
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:          Options.compression: NoCompression
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.num_levels: 7
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0890460c-1efa-4b98-b37d-c7b2c3489544
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003061030345, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003061035911, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003061, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003061036143, "job": 1, "event": "recovery_finished"}
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562240c22e00
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: DB pointer 0x562240d6c000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 13:44:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 2.70 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 2.70 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562240bf58d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 21 13:44:21 compute-0 ceph-mon[75031]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@-1(???) e1 preinit fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@-1(???).mds e1 new map
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-01-21T13:44:18:859596+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 21 13:44:21 compute-0 ceph-mon[75031]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 13:44:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 13:44:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : last_changed 2026-01-21T13:44:16.665097+0000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : created 2026-01-21T13:44:16.665097+0000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 21 13:44:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 21 13:44:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsmap 
Jan 21 13:44:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 21 13:44:21 compute-0 podman[75032]: 2026-01-21 13:44:21.061665576 +0000 UTC m=+0.047154018 container create 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 21 13:44:21 compute-0 systemd[1]: Started libpod-conmon-7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c.scope.
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 21 13:44:21 compute-0 ceph-mon[75031]: monmap epoch 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:21 compute-0 ceph-mon[75031]: last_changed 2026-01-21T13:44:16.665097+0000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: created 2026-01-21T13:44:16.665097+0000
Jan 21 13:44:21 compute-0 ceph-mon[75031]: min_mon_release 20 (tentacle)
Jan 21 13:44:21 compute-0 ceph-mon[75031]: election_strategy: 1
Jan 21 13:44:21 compute-0 ceph-mon[75031]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 21 13:44:21 compute-0 ceph-mon[75031]: fsmap 
Jan 21 13:44:21 compute-0 ceph-mon[75031]: osdmap e1: 0 total, 0 up, 0 in
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mgrmap e1: no daemons active
Jan 21 13:44:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:21 compute-0 podman[75032]: 2026-01-21 13:44:21.040634293 +0000 UTC m=+0.026122745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a76f679d587b444444f0c37b1b300000be6e65ea98d1b1d98a4544b0ed0479/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a76f679d587b444444f0c37b1b300000be6e65ea98d1b1d98a4544b0ed0479/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a76f679d587b444444f0c37b1b300000be6e65ea98d1b1d98a4544b0ed0479/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:21 compute-0 podman[75032]: 2026-01-21 13:44:21.161725727 +0000 UTC m=+0.147214159 container init 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:44:21 compute-0 podman[75032]: 2026-01-21 13:44:21.171981482 +0000 UTC m=+0.157469914 container start 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:21 compute-0 podman[75032]: 2026-01-21 13:44:21.175298762 +0000 UTC m=+0.160787194 container attach 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 21 13:44:21 compute-0 systemd[1]: libpod-7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c.scope: Deactivated successfully.
Jan 21 13:44:21 compute-0 podman[75032]: 2026-01-21 13:44:21.378808176 +0000 UTC m=+0.364296648 container died 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:21 compute-0 podman[75032]: 2026-01-21 13:44:21.424948169 +0000 UTC m=+0.410436611 container remove 7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c (image=quay.io/ceph/ceph:v20, name=objective_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:21 compute-0 systemd[1]: libpod-conmon-7d66f499a480fcc9a12d66f4fa14685d0b1945cf545eab0890f993a503583f7c.scope: Deactivated successfully.
Jan 21 13:44:21 compute-0 podman[75125]: 2026-01-21 13:44:21.497262637 +0000 UTC m=+0.045759975 container create 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 13:44:21 compute-0 systemd[1]: Started libpod-conmon-3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7.scope.
Jan 21 13:44:21 compute-0 podman[75125]: 2026-01-21 13:44:21.478488678 +0000 UTC m=+0.026986016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f0e85db03366460f6c7dcebaf91587124a66e48f9b8fde2f5f7ce1530fae1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f0e85db03366460f6c7dcebaf91587124a66e48f9b8fde2f5f7ce1530fae1f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f0e85db03366460f6c7dcebaf91587124a66e48f9b8fde2f5f7ce1530fae1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:21 compute-0 podman[75125]: 2026-01-21 13:44:21.601918909 +0000 UTC m=+0.150416277 container init 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 13:44:21 compute-0 podman[75125]: 2026-01-21 13:44:21.6078184 +0000 UTC m=+0.156315728 container start 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:21 compute-0 podman[75125]: 2026-01-21 13:44:21.61160251 +0000 UTC m=+0.160099878 container attach 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 13:44:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 21 13:44:21 compute-0 systemd[1]: libpod-3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7.scope: Deactivated successfully.
Jan 21 13:44:21 compute-0 conmon[75142]: conmon 3c5a08e0ec69921da58a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7.scope/container/memory.events
Jan 21 13:44:21 compute-0 podman[75125]: 2026-01-21 13:44:21.839311253 +0000 UTC m=+0.387808571 container died 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 13:44:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-99f0e85db03366460f6c7dcebaf91587124a66e48f9b8fde2f5f7ce1530fae1f-merged.mount: Deactivated successfully.
Jan 21 13:44:21 compute-0 podman[75125]: 2026-01-21 13:44:21.884084093 +0000 UTC m=+0.432581411 container remove 3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7 (image=quay.io/ceph/ceph:v20, name=romantic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 13:44:21 compute-0 systemd[1]: libpod-conmon-3c5a08e0ec69921da58adf3aac903bd3bdc34e12810803f998ab7bb192f994c7.scope: Deactivated successfully.
Jan 21 13:44:21 compute-0 systemd[1]: Reloading.
Jan 21 13:44:22 compute-0 systemd-sysv-generator[75212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:44:22 compute-0 systemd-rc-local-generator[75207]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:44:22 compute-0 systemd[1]: Reloading.
Jan 21 13:44:22 compute-0 systemd-rc-local-generator[75248]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:44:22 compute-0 systemd-sysv-generator[75251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:44:22 compute-0 systemd[1]: Starting Ceph mgr.compute-0.tnwklj for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:44:22 compute-0 podman[75303]: 2026-01-21 13:44:22.715375862 +0000 UTC m=+0.043358507 container create e43620387faca5e1843acf5892e98f1ab1b495216bbbf44f0fb6cf55c32acc3c (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da37902d286d7277952d091c9412e8cde529db529ba1b89f9112efa722ff9172/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da37902d286d7277952d091c9412e8cde529db529ba1b89f9112efa722ff9172/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da37902d286d7277952d091c9412e8cde529db529ba1b89f9112efa722ff9172/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da37902d286d7277952d091c9412e8cde529db529ba1b89f9112efa722ff9172/merged/var/lib/ceph/mgr/ceph-compute-0.tnwklj supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:22 compute-0 podman[75303]: 2026-01-21 13:44:22.782492536 +0000 UTC m=+0.110475251 container init e43620387faca5e1843acf5892e98f1ab1b495216bbbf44f0fb6cf55c32acc3c (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:22 compute-0 podman[75303]: 2026-01-21 13:44:22.693397596 +0000 UTC m=+0.021380271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:22 compute-0 podman[75303]: 2026-01-21 13:44:22.79185991 +0000 UTC m=+0.119842555 container start e43620387faca5e1843acf5892e98f1ab1b495216bbbf44f0fb6cf55c32acc3c (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 13:44:22 compute-0 bash[75303]: e43620387faca5e1843acf5892e98f1ab1b495216bbbf44f0fb6cf55c32acc3c
Jan 21 13:44:22 compute-0 systemd[1]: Started Ceph mgr.compute-0.tnwklj for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:44:22 compute-0 ceph-mgr[75322]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 13:44:22 compute-0 ceph-mgr[75322]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 21 13:44:22 compute-0 ceph-mgr[75322]: pidfile_write: ignore empty --pid-file
Jan 21 13:44:22 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'alerts'
Jan 21 13:44:22 compute-0 podman[75323]: 2026-01-21 13:44:22.884588807 +0000 UTC m=+0.048528612 container create 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:44:22 compute-0 systemd[1]: Started libpod-conmon-24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c.scope.
Jan 21 13:44:22 compute-0 podman[75323]: 2026-01-21 13:44:22.863898362 +0000 UTC m=+0.027838167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:22 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'balancer'
Jan 21 13:44:22 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704a60328aec2aafdcc0a6fd3cad5d5e8dbf8d33b05360fd667a6fb3f1458c0c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704a60328aec2aafdcc0a6fd3cad5d5e8dbf8d33b05360fd667a6fb3f1458c0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704a60328aec2aafdcc0a6fd3cad5d5e8dbf8d33b05360fd667a6fb3f1458c0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:22 compute-0 podman[75323]: 2026-01-21 13:44:22.987119038 +0000 UTC m=+0.151058843 container init 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 13:44:22 compute-0 podman[75323]: 2026-01-21 13:44:22.994296489 +0000 UTC m=+0.158236264 container start 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:22 compute-0 podman[75323]: 2026-01-21 13:44:22.998677743 +0000 UTC m=+0.162617548 container attach 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 21 13:44:23 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'cephadm'
Jan 21 13:44:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 21 13:44:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3090944822' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]: 
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]: {
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "health": {
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "status": "HEALTH_OK",
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "checks": {},
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "mutes": []
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     },
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "election_epoch": 5,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "quorum": [
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         0
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     ],
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "quorum_names": [
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "compute-0"
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     ],
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "quorum_age": 2,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "monmap": {
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "epoch": 1,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "min_mon_release_name": "tentacle",
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "num_mons": 1
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     },
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "osdmap": {
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "epoch": 1,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "num_osds": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "num_up_osds": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "osd_up_since": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "num_in_osds": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "osd_in_since": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "num_remapped_pgs": 0
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     },
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "pgmap": {
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "pgs_by_state": [],
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "num_pgs": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "num_pools": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "num_objects": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "data_bytes": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "bytes_used": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "bytes_avail": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "bytes_total": 0
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     },
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "fsmap": {
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "epoch": 1,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "btime": "2026-01-21T13:44:18:859596+0000",
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "by_rank": [],
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "up:standby": 0
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     },
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "mgrmap": {
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "available": false,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "num_standbys": 0,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "modules": [
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:             "iostat",
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:             "nfs"
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         ],
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "services": {}
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     },
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "servicemap": {
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "epoch": 1,
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "modified": "2026-01-21T13:44:18.861719+0000",
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:         "services": {}
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     },
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]:     "progress_events": {}
Jan 21 13:44:23 compute-0 dreamy_solomon[75360]: }
Jan 21 13:44:23 compute-0 systemd[1]: libpod-24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c.scope: Deactivated successfully.
Jan 21 13:44:23 compute-0 podman[75323]: 2026-01-21 13:44:23.226937369 +0000 UTC m=+0.390877144 container died 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-704a60328aec2aafdcc0a6fd3cad5d5e8dbf8d33b05360fd667a6fb3f1458c0c-merged.mount: Deactivated successfully.
Jan 21 13:44:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3090944822' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 13:44:23 compute-0 podman[75323]: 2026-01-21 13:44:23.271119886 +0000 UTC m=+0.435059661 container remove 24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c (image=quay.io/ceph/ceph:v20, name=dreamy_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 13:44:23 compute-0 systemd[1]: libpod-conmon-24e655b9126c58bc7a8a673b4bc32732d6c8dd65c2743750814308dbe7f9383c.scope: Deactivated successfully.
Jan 21 13:44:23 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'crash'
Jan 21 13:44:23 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'dashboard'
Jan 21 13:44:24 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'devicehealth'
Jan 21 13:44:24 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 13:44:24 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 13:44:24 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 13:44:24 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]:   from numpy import show_config as show_numpy_config
Jan 21 13:44:24 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'influx'
Jan 21 13:44:24 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'insights'
Jan 21 13:44:24 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'iostat'
Jan 21 13:44:25 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'k8sevents'
Jan 21 13:44:25 compute-0 podman[75410]: 2026-01-21 13:44:25.34940161 +0000 UTC m=+0.051406030 container create 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:44:25 compute-0 systemd[1]: Started libpod-conmon-6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220.scope.
Jan 21 13:44:25 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'localpool'
Jan 21 13:44:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cad89fb347d284ebf9f37716b3907d0c0465e2c39d4f384be10a0773bbcfa3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cad89fb347d284ebf9f37716b3907d0c0465e2c39d4f384be10a0773bbcfa3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cad89fb347d284ebf9f37716b3907d0c0465e2c39d4f384be10a0773bbcfa3c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:25 compute-0 podman[75410]: 2026-01-21 13:44:25.326259256 +0000 UTC m=+0.028263786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:25 compute-0 podman[75410]: 2026-01-21 13:44:25.436867469 +0000 UTC m=+0.138871919 container init 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:25 compute-0 podman[75410]: 2026-01-21 13:44:25.444469742 +0000 UTC m=+0.146474192 container start 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 13:44:25 compute-0 podman[75410]: 2026-01-21 13:44:25.44901877 +0000 UTC m=+0.151023200 container attach 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:25 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 13:44:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 21 13:44:25 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033193989' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 13:44:25 compute-0 festive_chatelet[75426]: 
Jan 21 13:44:25 compute-0 festive_chatelet[75426]: {
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "health": {
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "status": "HEALTH_OK",
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "checks": {},
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "mutes": []
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     },
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "election_epoch": 5,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "quorum": [
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         0
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     ],
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "quorum_names": [
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "compute-0"
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     ],
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "quorum_age": 4,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "monmap": {
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "epoch": 1,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "min_mon_release_name": "tentacle",
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "num_mons": 1
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     },
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "osdmap": {
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "epoch": 1,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "num_osds": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "num_up_osds": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "osd_up_since": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "num_in_osds": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "osd_in_since": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "num_remapped_pgs": 0
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     },
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "pgmap": {
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "pgs_by_state": [],
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "num_pgs": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "num_pools": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "num_objects": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "data_bytes": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "bytes_used": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "bytes_avail": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "bytes_total": 0
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     },
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "fsmap": {
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "epoch": 1,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "btime": "2026-01-21T13:44:18:859596+0000",
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "by_rank": [],
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "up:standby": 0
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     },
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "mgrmap": {
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "available": false,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "num_standbys": 0,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "modules": [
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:             "iostat",
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:             "nfs"
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         ],
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "services": {}
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     },
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "servicemap": {
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "epoch": 1,
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "modified": "2026-01-21T13:44:18.861719+0000",
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:         "services": {}
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     },
Jan 21 13:44:25 compute-0 festive_chatelet[75426]:     "progress_events": {}
Jan 21 13:44:25 compute-0 festive_chatelet[75426]: }
Jan 21 13:44:25 compute-0 systemd[1]: libpod-6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220.scope: Deactivated successfully.
Jan 21 13:44:25 compute-0 podman[75452]: 2026-01-21 13:44:25.667897582 +0000 UTC m=+0.023191675 container died 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:25 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3033193989' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 13:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cad89fb347d284ebf9f37716b3907d0c0465e2c39d4f384be10a0773bbcfa3c-merged.mount: Deactivated successfully.
Jan 21 13:44:25 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'mirroring'
Jan 21 13:44:25 compute-0 podman[75452]: 2026-01-21 13:44:25.704336274 +0000 UTC m=+0.059630347 container remove 6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220 (image=quay.io/ceph/ceph:v20, name=festive_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 13:44:25 compute-0 systemd[1]: libpod-conmon-6dc07185eee4276f3671f85bbb221927580935c11c132759c87a4b74339c0220.scope: Deactivated successfully.
Jan 21 13:44:25 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'nfs'
Jan 21 13:44:26 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'orchestrator'
Jan 21 13:44:26 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 13:44:26 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'osd_support'
Jan 21 13:44:26 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 13:44:26 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'progress'
Jan 21 13:44:26 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'prometheus'
Jan 21 13:44:26 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'rbd_support'
Jan 21 13:44:26 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'rgw'
Jan 21 13:44:27 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'rook'
Jan 21 13:44:27 compute-0 podman[75467]: 2026-01-21 13:44:27.792859142 +0000 UTC m=+0.056484641 container create 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:27 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'selftest'
Jan 21 13:44:27 compute-0 systemd[1]: Started libpod-conmon-0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c.scope.
Jan 21 13:44:27 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be97d012b72548c1e462a0611ebc46d13d6b413acb3457f335623410e454a20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be97d012b72548c1e462a0611ebc46d13d6b413acb3457f335623410e454a20/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be97d012b72548c1e462a0611ebc46d13d6b413acb3457f335623410e454a20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:27 compute-0 podman[75467]: 2026-01-21 13:44:27.76723487 +0000 UTC m=+0.030860379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:27 compute-0 podman[75467]: 2026-01-21 13:44:27.862590559 +0000 UTC m=+0.126216058 container init 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:27 compute-0 podman[75467]: 2026-01-21 13:44:27.868459749 +0000 UTC m=+0.132085238 container start 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 13:44:27 compute-0 podman[75467]: 2026-01-21 13:44:27.87227733 +0000 UTC m=+0.135902819 container attach 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:27 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'smb'
Jan 21 13:44:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 21 13:44:28 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3162610670' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]: 
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]: {
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "health": {
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "status": "HEALTH_OK",
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "checks": {},
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "mutes": []
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     },
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "election_epoch": 5,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "quorum": [
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         0
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     ],
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "quorum_names": [
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "compute-0"
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     ],
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "quorum_age": 6,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "monmap": {
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "epoch": 1,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "min_mon_release_name": "tentacle",
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "num_mons": 1
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     },
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "osdmap": {
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "epoch": 1,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "num_osds": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "num_up_osds": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "osd_up_since": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "num_in_osds": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "osd_in_since": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "num_remapped_pgs": 0
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     },
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "pgmap": {
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "pgs_by_state": [],
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "num_pgs": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "num_pools": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "num_objects": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "data_bytes": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "bytes_used": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "bytes_avail": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "bytes_total": 0
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     },
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "fsmap": {
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "epoch": 1,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "btime": "2026-01-21T13:44:18.859596+0000",
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "by_rank": [],
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "up:standby": 0
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     },
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "mgrmap": {
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "available": false,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "num_standbys": 0,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "modules": [
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:             "iostat",
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:             "nfs"
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         ],
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "services": {}
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     },
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "servicemap": {
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "epoch": 1,
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "modified": "2026-01-21T13:44:18.861719+0000",
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:         "services": {}
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     },
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]:     "progress_events": {}
Jan 21 13:44:28 compute-0 vigilant_satoshi[75483]: }
Jan 21 13:44:28 compute-0 systemd[1]: libpod-0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c.scope: Deactivated successfully.
Jan 21 13:44:28 compute-0 podman[75467]: 2026-01-21 13:44:28.074872553 +0000 UTC m=+0.338498102 container died 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2be97d012b72548c1e462a0611ebc46d13d6b413acb3457f335623410e454a20-merged.mount: Deactivated successfully.
Jan 21 13:44:28 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3162610670' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 13:44:28 compute-0 podman[75467]: 2026-01-21 13:44:28.129898218 +0000 UTC m=+0.393523717 container remove 0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c (image=quay.io/ceph/ceph:v20, name=vigilant_satoshi, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:28 compute-0 systemd[1]: libpod-conmon-0b4c063c8339dbf28dc47163ffca99b229054a6c42147b99fdfd15a1df3e015c.scope: Deactivated successfully.
Jan 21 13:44:28 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'snap_schedule'
Jan 21 13:44:28 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'stats'
Jan 21 13:44:28 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'status'
Jan 21 13:44:28 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'telegraf'
Jan 21 13:44:28 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'telemetry'
Jan 21 13:44:28 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 13:44:28 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'volumes'
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: ms_deliver_dispatch: unhandled message 0x55ec554ff860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.tnwklj
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr handle_mgr_map Activating!
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr handle_mgr_map I am now activating
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.tnwklj(active, starting, since 0.0158873s)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mds metadata"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e1 all = 1
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.tnwklj", "id": "compute-0.tnwklj"} v 0)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr metadata", "who": "compute-0.tnwklj", "id": "compute-0.tnwklj"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: balancer
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: crash
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : Manager daemon compute-0.tnwklj is now available
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [balancer INFO root] Starting
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: devicehealth
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [devicehealth INFO root] Starting
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:44:29
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [balancer INFO root] No pools available
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: iostat
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: nfs
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: orchestrator
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: pg_autoscaler
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: progress
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [progress INFO root] Loading...
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [progress INFO root] No stored events to load
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [progress INFO root] Loaded [] historic events
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [progress INFO root] Loaded OSDMap, ready.
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [rbd_support INFO root] recovery thread starting
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [rbd_support INFO root] starting setup
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: rbd_support
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: status
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} v 0)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: telemetry
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [rbd_support INFO root] PerfHandler: starting
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TaskHandler: starting
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} v 0)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: [rbd_support INFO root] setup complete
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: Activating manager daemon compute-0.tnwklj
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mgrmap e2: compute-0.tnwklj(active, starting, since 0.0158873s)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mds metadata"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr metadata", "who": "compute-0.tnwklj", "id": "compute-0.tnwklj"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mon[75031]: Manager daemon compute-0.tnwklj is now available
Jan 21 13:44:29 compute-0 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} : dispatch
Jan 21 13:44:29 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: volumes
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 21 13:44:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:30 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.tnwklj(active, since 1.03423s)
Jan 21 13:44:30 compute-0 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:30 compute-0 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:30 compute-0 ceph-mon[75031]: from='mgr.14102 192.168.122.100:0/3682993501' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:30 compute-0 ceph-mon[75031]: mgrmap e3: compute-0.tnwklj(active, since 1.03423s)
Jan 21 13:44:30 compute-0 podman[75598]: 2026-01-21 13:44:30.217098266 +0000 UTC m=+0.055860546 container create 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 13:44:30 compute-0 systemd[1]: Started libpod-conmon-42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7.scope.
Jan 21 13:44:30 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81662b206783b6855cf10aca4034cbd730bbae8aa32c4bed910ea1716a0a4b48/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81662b206783b6855cf10aca4034cbd730bbae8aa32c4bed910ea1716a0a4b48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81662b206783b6855cf10aca4034cbd730bbae8aa32c4bed910ea1716a0a4b48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:30 compute-0 podman[75598]: 2026-01-21 13:44:30.198312236 +0000 UTC m=+0.037074476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:30 compute-0 podman[75598]: 2026-01-21 13:44:30.31264764 +0000 UTC m=+0.151409900 container init 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:30 compute-0 podman[75598]: 2026-01-21 13:44:30.319178486 +0000 UTC m=+0.157940726 container start 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:30 compute-0 podman[75598]: 2026-01-21 13:44:30.324077263 +0000 UTC m=+0.162839533 container attach 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 21 13:44:30 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1340416511' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]: 
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]: {
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "health": {
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "status": "HEALTH_OK",
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "checks": {},
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "mutes": []
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     },
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "election_epoch": 5,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "quorum": [
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         0
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     ],
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "quorum_names": [
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "compute-0"
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     ],
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "quorum_age": 9,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "monmap": {
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "epoch": 1,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "min_mon_release_name": "tentacle",
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "num_mons": 1
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     },
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "osdmap": {
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "epoch": 1,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "num_osds": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "num_up_osds": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "osd_up_since": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "num_in_osds": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "osd_in_since": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "num_remapped_pgs": 0
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     },
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "pgmap": {
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "pgs_by_state": [],
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "num_pgs": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "num_pools": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "num_objects": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "data_bytes": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "bytes_used": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "bytes_avail": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "bytes_total": 0
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     },
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "fsmap": {
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "epoch": 1,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "btime": "2026-01-21T13:44:18.859596+0000",
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "by_rank": [],
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "up:standby": 0
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     },
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "mgrmap": {
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "available": true,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "num_standbys": 0,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "modules": [
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:             "iostat",
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:             "nfs"
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         ],
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "services": {}
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     },
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "servicemap": {
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "epoch": 1,
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "modified": "2026-01-21T13:44:18.861719+0000",
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:         "services": {}
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     },
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]:     "progress_events": {}
Jan 21 13:44:30 compute-0 affectionate_chaum[75614]: }
Jan 21 13:44:30 compute-0 systemd[1]: libpod-42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7.scope: Deactivated successfully.
Jan 21 13:44:30 compute-0 conmon[75614]: conmon 42747becb9a5f7cbb6c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7.scope/container/memory.events
Jan 21 13:44:30 compute-0 podman[75598]: 2026-01-21 13:44:30.938821186 +0000 UTC m=+0.777583466 container died 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 13:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-81662b206783b6855cf10aca4034cbd730bbae8aa32c4bed910ea1716a0a4b48-merged.mount: Deactivated successfully.
Jan 21 13:44:30 compute-0 podman[75598]: 2026-01-21 13:44:30.985961363 +0000 UTC m=+0.824723633 container remove 42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7 (image=quay.io/ceph/ceph:v20, name=affectionate_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 13:44:30 compute-0 systemd[1]: libpod-conmon-42747becb9a5f7cbb6c7b18bfa1b1e93345b306d1649f10034462b16251bd3e7.scope: Deactivated successfully.
Jan 21 13:44:31 compute-0 podman[75652]: 2026-01-21 13:44:31.058457566 +0000 UTC m=+0.046782200 container create ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:44:31 compute-0 systemd[1]: Started libpod-conmon-ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788.scope.
Jan 21 13:44:31 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79fe2db340ffb494ceda6f8ada31407dfc3bae41dfd01d2efa2a4228711371e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79fe2db340ffb494ceda6f8ada31407dfc3bae41dfd01d2efa2a4228711371e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79fe2db340ffb494ceda6f8ada31407dfc3bae41dfd01d2efa2a4228711371e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c79fe2db340ffb494ceda6f8ada31407dfc3bae41dfd01d2efa2a4228711371e/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:31 compute-0 podman[75652]: 2026-01-21 13:44:31.036195784 +0000 UTC m=+0.024520418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:31 compute-0 podman[75652]: 2026-01-21 13:44:31.145804293 +0000 UTC m=+0.134128927 container init ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 13:44:31 compute-0 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 13:44:31 compute-0 podman[75652]: 2026-01-21 13:44:31.155796552 +0000 UTC m=+0.144121156 container start ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 13:44:31 compute-0 podman[75652]: 2026-01-21 13:44:31.159677895 +0000 UTC m=+0.148002499 container attach ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:44:31 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:44:31 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.tnwklj(active, since 2s)
Jan 21 13:44:31 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1340416511' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 21 13:44:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 21 13:44:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/926465230' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 13:44:31 compute-0 affectionate_mcnulty[75668]: 
Jan 21 13:44:31 compute-0 affectionate_mcnulty[75668]: [global]
Jan 21 13:44:31 compute-0 affectionate_mcnulty[75668]:         fsid = 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:31 compute-0 affectionate_mcnulty[75668]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 21 13:44:31 compute-0 affectionate_mcnulty[75668]:         osd_crush_chooseleaf_type = 0
Jan 21 13:44:31 compute-0 systemd[1]: libpod-ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788.scope: Deactivated successfully.
Jan 21 13:44:31 compute-0 podman[75652]: 2026-01-21 13:44:31.644523403 +0000 UTC m=+0.632848017 container died ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c79fe2db340ffb494ceda6f8ada31407dfc3bae41dfd01d2efa2a4228711371e-merged.mount: Deactivated successfully.
Jan 21 13:44:31 compute-0 podman[75652]: 2026-01-21 13:44:31.682641554 +0000 UTC m=+0.670966148 container remove ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788 (image=quay.io/ceph/ceph:v20, name=affectionate_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:31 compute-0 systemd[1]: libpod-conmon-ef4447eade18f8276c2f0ac084faf3c1e2d4b0fcc94220157c0038d739816788.scope: Deactivated successfully.
Jan 21 13:44:31 compute-0 podman[75705]: 2026-01-21 13:44:31.762163115 +0000 UTC m=+0.050622101 container create 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:31 compute-0 systemd[1]: Started libpod-conmon-8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940.scope.
Jan 21 13:44:31 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9019d3d50c8722baa4d6272f282342ab6401abcb7fe806e8007be463f284e0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9019d3d50c8722baa4d6272f282342ab6401abcb7fe806e8007be463f284e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9019d3d50c8722baa4d6272f282342ab6401abcb7fe806e8007be463f284e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:31 compute-0 podman[75705]: 2026-01-21 13:44:31.735324954 +0000 UTC m=+0.023783960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:31 compute-0 podman[75705]: 2026-01-21 13:44:31.836150774 +0000 UTC m=+0.124609770 container init 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 13:44:31 compute-0 podman[75705]: 2026-01-21 13:44:31.850113707 +0000 UTC m=+0.138572723 container start 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 13:44:31 compute-0 podman[75705]: 2026-01-21 13:44:31.854125113 +0000 UTC m=+0.142584119 container attach 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 13:44:32 compute-0 ceph-mon[75031]: mgrmap e4: compute-0.tnwklj(active, since 2s)
Jan 21 13:44:32 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/926465230' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 13:44:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 21 13:44:32 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2844574049' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 21 13:44:33 compute-0 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 13:44:33 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:44:33 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2844574049' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 21 13:44:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2844574049' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 21 13:44:33 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.tnwklj(active, since 4s)
Jan 21 13:44:33 compute-0 systemd[1]: libpod-8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940.scope: Deactivated successfully.
Jan 21 13:44:33 compute-0 conmon[75722]: conmon 8d4ffffe4e948b803d0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940.scope/container/memory.events
Jan 21 13:44:33 compute-0 podman[75748]: 2026-01-21 13:44:33.33506587 +0000 UTC m=+0.026387231 container died 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:33 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: ignoring --setuser ceph since I am not root
Jan 21 13:44:33 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: ignoring --setgroup ceph since I am not root
Jan 21 13:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e9019d3d50c8722baa4d6272f282342ab6401abcb7fe806e8007be463f284e0-merged.mount: Deactivated successfully.
Jan 21 13:44:33 compute-0 ceph-mgr[75322]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 21 13:44:33 compute-0 ceph-mgr[75322]: pidfile_write: ignore empty --pid-file
Jan 21 13:44:33 compute-0 podman[75748]: 2026-01-21 13:44:33.376849209 +0000 UTC m=+0.068170550 container remove 8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940 (image=quay.io/ceph/ceph:v20, name=practical_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 13:44:33 compute-0 systemd[1]: libpod-conmon-8d4ffffe4e948b803d0bd88a9aea019fbf4155b135be4bbb5b3ec395d4790940.scope: Deactivated successfully.
Jan 21 13:44:33 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'alerts'
Jan 21 13:44:33 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'balancer'
Jan 21 13:44:33 compute-0 podman[75782]: 2026-01-21 13:44:33.42082081 +0000 UTC m=+0.020681105 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:33 compute-0 podman[75782]: 2026-01-21 13:44:33.533679587 +0000 UTC m=+0.133539882 container create 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:33 compute-0 systemd[1]: Started libpod-conmon-4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9.scope.
Jan 21 13:44:33 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'cephadm'
Jan 21 13:44:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b1cff38de1756ec5fd604e4eaaffc7ed745825c3dfe883a41687d1e95ac701/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b1cff38de1756ec5fd604e4eaaffc7ed745825c3dfe883a41687d1e95ac701/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b1cff38de1756ec5fd604e4eaaffc7ed745825c3dfe883a41687d1e95ac701/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:33 compute-0 podman[75782]: 2026-01-21 13:44:33.649871184 +0000 UTC m=+0.249731479 container init 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:33 compute-0 podman[75782]: 2026-01-21 13:44:33.6559536 +0000 UTC m=+0.255813885 container start 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:33 compute-0 podman[75782]: 2026-01-21 13:44:33.659514065 +0000 UTC m=+0.259374360 container attach 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 13:44:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 21 13:44:34 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088586261' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 21 13:44:34 compute-0 hungry_hamilton[75799]: {
Jan 21 13:44:34 compute-0 hungry_hamilton[75799]:     "epoch": 5,
Jan 21 13:44:34 compute-0 hungry_hamilton[75799]:     "available": true,
Jan 21 13:44:34 compute-0 hungry_hamilton[75799]:     "active_name": "compute-0.tnwklj",
Jan 21 13:44:34 compute-0 hungry_hamilton[75799]:     "num_standby": 0
Jan 21 13:44:34 compute-0 hungry_hamilton[75799]: }
Jan 21 13:44:34 compute-0 systemd[1]: libpod-4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9.scope: Deactivated successfully.
Jan 21 13:44:34 compute-0 podman[75782]: 2026-01-21 13:44:34.181290077 +0000 UTC m=+0.781150372 container died 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 13:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6b1cff38de1756ec5fd604e4eaaffc7ed745825c3dfe883a41687d1e95ac701-merged.mount: Deactivated successfully.
Jan 21 13:44:34 compute-0 podman[75782]: 2026-01-21 13:44:34.217735128 +0000 UTC m=+0.817595413 container remove 4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9 (image=quay.io/ceph/ceph:v20, name=hungry_hamilton, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:34 compute-0 systemd[1]: libpod-conmon-4d963f5e2e6d6a41124c8607174e254d551de4c6e98bdba311e4ebf182a003a9.scope: Deactivated successfully.
Jan 21 13:44:34 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2844574049' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 21 13:44:34 compute-0 ceph-mon[75031]: mgrmap e5: compute-0.tnwklj(active, since 4s)
Jan 21 13:44:34 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3088586261' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 21 13:44:34 compute-0 podman[75848]: 2026-01-21 13:44:34.286860589 +0000 UTC m=+0.048368186 container create 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Jan 21 13:44:34 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'crash'
Jan 21 13:44:34 compute-0 systemd[1]: Started libpod-conmon-4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de.scope.
Jan 21 13:44:34 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9d745ee400c0247c5d80a711334d76fae5630d6026c6cf06a34c8e7ce8e4af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9d745ee400c0247c5d80a711334d76fae5630d6026c6cf06a34c8e7ce8e4af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9d745ee400c0247c5d80a711334d76fae5630d6026c6cf06a34c8e7ce8e4af/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:34 compute-0 podman[75848]: 2026-01-21 13:44:34.352346955 +0000 UTC m=+0.113854552 container init 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:34 compute-0 podman[75848]: 2026-01-21 13:44:34.268574893 +0000 UTC m=+0.030082520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:34 compute-0 podman[75848]: 2026-01-21 13:44:34.358506142 +0000 UTC m=+0.120013739 container start 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:34 compute-0 podman[75848]: 2026-01-21 13:44:34.366102044 +0000 UTC m=+0.127609681 container attach 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 13:44:34 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'dashboard'
Jan 21 13:44:35 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'devicehealth'
Jan 21 13:44:35 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 13:44:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 13:44:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 13:44:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]:   from numpy import show_config as show_numpy_config
Jan 21 13:44:35 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'influx'
Jan 21 13:44:35 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'insights'
Jan 21 13:44:35 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'iostat'
Jan 21 13:44:35 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'k8sevents'
Jan 21 13:44:35 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'localpool'
Jan 21 13:44:35 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 13:44:36 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'mirroring'
Jan 21 13:44:36 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'nfs'
Jan 21 13:44:36 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'orchestrator'
Jan 21 13:44:36 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 13:44:36 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'osd_support'
Jan 21 13:44:36 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 13:44:36 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'progress'
Jan 21 13:44:36 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'prometheus'
Jan 21 13:44:37 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'rbd_support'
Jan 21 13:44:37 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'rgw'
Jan 21 13:44:37 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'rook'
Jan 21 13:44:38 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'selftest'
Jan 21 13:44:38 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'smb'
Jan 21 13:44:38 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'snap_schedule'
Jan 21 13:44:38 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'stats'
Jan 21 13:44:38 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'status'
Jan 21 13:44:38 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'telegraf'
Jan 21 13:44:38 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'telemetry'
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: mgr[py] Loading python module 'volumes'
Jan 21 13:44:39 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : Active manager daemon compute-0.tnwklj restarted
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:44:39 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.tnwklj
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: ms_deliver_dispatch: unhandled message 0x55ddfbe18000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: mgr handle_mgr_map Activating!
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: mgr handle_mgr_map I am now activating
Jan 21 13:44:39 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 21 13:44:39 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.tnwklj(active, starting, since 0.0142623s)
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 13:44:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.tnwklj", "id": "compute-0.tnwklj"} v 0)
Jan 21 13:44:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr metadata", "who": "compute-0.tnwklj", "id": "compute-0.tnwklj"} : dispatch
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 21 13:44:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mds metadata"} : dispatch
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e1 all = 1
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 21 13:44:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata"} : dispatch
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 21 13:44:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata"} : dispatch
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: balancer
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:39 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : Manager daemon compute-0.tnwklj is now available
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Starting
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:44:39
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:44:39 compute-0 ceph-mgr[75322]: [balancer INFO root] No pools available
Jan 21 13:44:39 compute-0 ceph-mon[75031]: Active manager daemon compute-0.tnwklj restarted
Jan 21 13:44:39 compute-0 ceph-mon[75031]: Activating manager daemon compute-0.tnwklj
Jan 21 13:44:39 compute-0 ceph-mon[75031]: osdmap e2: 0 total, 0 up, 0 in
Jan 21 13:44:39 compute-0 ceph-mon[75031]: mgrmap e6: compute-0.tnwklj(active, starting, since 0.0142623s)
Jan 21 13:44:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 21 13:44:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr metadata", "who": "compute-0.tnwklj", "id": "compute-0.tnwklj"} : dispatch
Jan 21 13:44:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mds metadata"} : dispatch
Jan 21 13:44:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata"} : dispatch
Jan 21 13:44:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata"} : dispatch
Jan 21 13:44:39 compute-0 ceph-mon[75031]: Manager daemon compute-0.tnwklj is now available
Jan 21 13:44:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 21 13:44:40 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.tnwklj(active, since 1.41169s)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 21 13:44:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 21 13:44:40 compute-0 happy_bell[75863]: {
Jan 21 13:44:40 compute-0 happy_bell[75863]:     "mgrmap_epoch": 7,
Jan 21 13:44:40 compute-0 happy_bell[75863]:     "initialized": true
Jan 21 13:44:40 compute-0 happy_bell[75863]: }
Jan 21 13:44:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 21 13:44:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 21 13:44:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 21 13:44:40 compute-0 systemd[1]: libpod-4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de.scope: Deactivated successfully.
Jan 21 13:44:40 compute-0 podman[75848]: 2026-01-21 13:44:40.933329619 +0000 UTC m=+6.694837276 container died 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: cephadm
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: crash
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: devicehealth
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: iostat
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [devicehealth INFO root] Starting
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: nfs
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: orchestrator
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: pg_autoscaler
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: progress
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [progress INFO root] Loading...
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [progress INFO root] No stored events to load
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [progress INFO root] Loaded [] historic events
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [progress INFO root] Loaded OSDMap, ready.
Jan 21 13:44:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 13:44:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:44:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 13:44:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:44:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb9d745ee400c0247c5d80a711334d76fae5630d6026c6cf06a34c8e7ce8e4af-merged.mount: Deactivated successfully.
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] recovery thread starting
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] starting setup
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: rbd_support
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: status
Jan 21 13:44:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} v 0)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} : dispatch
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: telemetry
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] PerfHandler: starting
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TaskHandler: starting
Jan 21 13:44:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} v 0)
Jan 21 13:44:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} : dispatch
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] setup complete
Jan 21 13:44:40 compute-0 podman[75848]: 2026-01-21 13:44:40.981905173 +0000 UTC m=+6.743412770 container remove 4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de (image=quay.io/ceph/ceph:v20, name=happy_bell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:40 compute-0 ceph-mgr[75322]: mgr load Constructed class from module: volumes
Jan 21 13:44:40 compute-0 systemd[1]: libpod-conmon-4302cb9cfcbc582ecf60e8791fbab8d98d41669c04bf981c1560bc074e4016de.scope: Deactivated successfully.
Jan 21 13:44:41 compute-0 podman[76007]: 2026-01-21 13:44:41.047043274 +0000 UTC m=+0.042960265 container create f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019911966 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:44:41 compute-0 systemd[1]: Started libpod-conmon-f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859.scope.
Jan 21 13:44:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baeee0fb04c6fa7093a7a794a1448e998a3a8dd5af7847e68df82008701846b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baeee0fb04c6fa7093a7a794a1448e998a3a8dd5af7847e68df82008701846b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baeee0fb04c6fa7093a7a794a1448e998a3a8dd5af7847e68df82008701846b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:41 compute-0 podman[76007]: 2026-01-21 13:44:41.02718946 +0000 UTC m=+0.023106481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:41 compute-0 podman[76007]: 2026-01-21 13:44:41.134416701 +0000 UTC m=+0.130333712 container init f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:41 compute-0 podman[76007]: 2026-01-21 13:44:41.144415214 +0000 UTC m=+0.140332205 container start f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:41 compute-0 podman[76007]: 2026-01-21 13:44:41.151218301 +0000 UTC m=+0.147135332 container attach f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:41 compute-0 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 13:44:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 21 13:44:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1952398831' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 21 13:44:41 compute-0 ceph-mon[75031]: mgrmap e7: compute-0.tnwklj(active, since 1.41169s)
Jan 21 13:44:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:41 compute-0 ceph-mon[75031]: Found migration_current of "None". Setting to last migration.
Jan 21 13:44:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:44:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:44:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/mirror_snapshot_schedule"} : dispatch
Jan 21 13:44:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.tnwklj/trash_purge_schedule"} : dispatch
Jan 21 13:44:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1952398831' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 21 13:44:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1952398831' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 21 13:44:41 compute-0 cool_herschel[76024]: module 'orchestrator' is already enabled (always-on)
Jan 21 13:44:41 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.tnwklj(active, since 2s)
Jan 21 13:44:41 compute-0 systemd[1]: libpod-f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859.scope: Deactivated successfully.
Jan 21 13:44:42 compute-0 podman[76050]: 2026-01-21 13:44:42.029724334 +0000 UTC m=+0.038818264 container died f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0baeee0fb04c6fa7093a7a794a1448e998a3a8dd5af7847e68df82008701846b-merged.mount: Deactivated successfully.
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: [cephadm INFO cherrypy.error] [21/Jan/2026:13:44:42] ENGINE Bus STARTING
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : [21/Jan/2026:13:44:42] ENGINE Bus STARTING
Jan 21 13:44:42 compute-0 podman[76050]: 2026-01-21 13:44:42.076080646 +0000 UTC m=+0.085174566 container remove f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859 (image=quay.io/ceph/ceph:v20, name=cool_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 13:44:42 compute-0 systemd[1]: libpod-conmon-f12293d6aa31da75af7fa4f6a29354b5a7c577e5cc6c2490432ed912f19e2859.scope: Deactivated successfully.
Jan 21 13:44:42 compute-0 podman[76076]: 2026-01-21 13:44:42.166618419 +0000 UTC m=+0.053697027 container create 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: [cephadm INFO cherrypy.error] [21/Jan/2026:13:44:42] ENGINE Serving on https://192.168.122.100:7150
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : [21/Jan/2026:13:44:42] ENGINE Serving on https://192.168.122.100:7150
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: [cephadm INFO cherrypy.error] [21/Jan/2026:13:44:42] ENGINE Client ('192.168.122.100', 33734) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : [21/Jan/2026:13:44:42] ENGINE Client ('192.168.122.100', 33734) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 13:44:42 compute-0 systemd[1]: Started libpod-conmon-9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000.scope.
Jan 21 13:44:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d437d9abe57daf62bc3014d5486546b11bf12347fca764435d939b63d1533c2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d437d9abe57daf62bc3014d5486546b11bf12347fca764435d939b63d1533c2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d437d9abe57daf62bc3014d5486546b11bf12347fca764435d939b63d1533c2b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:42 compute-0 podman[76076]: 2026-01-21 13:44:42.139726915 +0000 UTC m=+0.026805613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: [cephadm INFO cherrypy.error] [21/Jan/2026:13:44:42] ENGINE Serving on http://192.168.122.100:8765
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : [21/Jan/2026:13:44:42] ENGINE Serving on http://192.168.122.100:8765
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: [cephadm INFO cherrypy.error] [21/Jan/2026:13:44:42] ENGINE Bus STARTED
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : [21/Jan/2026:13:44:42] ENGINE Bus STARTED
Jan 21 13:44:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 13:44:42 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:44:42 compute-0 podman[76076]: 2026-01-21 13:44:42.289596035 +0000 UTC m=+0.176674663 container init 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:42 compute-0 podman[76076]: 2026-01-21 13:44:42.295266486 +0000 UTC m=+0.182345114 container start 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:42 compute-0 podman[76076]: 2026-01-21 13:44:42.300060904 +0000 UTC m=+0.187139542 container attach 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 21 13:44:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 13:44:42 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:44:42 compute-0 systemd[1]: libpod-9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000.scope: Deactivated successfully.
Jan 21 13:44:42 compute-0 podman[76076]: 2026-01-21 13:44:42.849784044 +0000 UTC m=+0.736862732 container died 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 13:44:42 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:44:43 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1952398831' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 21 13:44:43 compute-0 ceph-mon[75031]: mgrmap e8: compute-0.tnwklj(active, since 2s)
Jan 21 13:44:43 compute-0 ceph-mon[75031]: [21/Jan/2026:13:44:42] ENGINE Bus STARTING
Jan 21 13:44:43 compute-0 ceph-mon[75031]: [21/Jan/2026:13:44:42] ENGINE Serving on https://192.168.122.100:7150
Jan 21 13:44:43 compute-0 ceph-mon[75031]: [21/Jan/2026:13:44:42] ENGINE Client ('192.168.122.100', 33734) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 21 13:44:43 compute-0 ceph-mon[75031]: [21/Jan/2026:13:44:42] ENGINE Serving on http://192.168.122.100:8765
Jan 21 13:44:43 compute-0 ceph-mon[75031]: [21/Jan/2026:13:44:42] ENGINE Bus STARTED
Jan 21 13:44:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:44:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:44:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d437d9abe57daf62bc3014d5486546b11bf12347fca764435d939b63d1533c2b-merged.mount: Deactivated successfully.
Jan 21 13:44:43 compute-0 podman[76076]: 2026-01-21 13:44:43.123904968 +0000 UTC m=+1.010983576 container remove 9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000 (image=quay.io/ceph/ceph:v20, name=eager_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 13:44:43 compute-0 podman[76144]: 2026-01-21 13:44:43.192025561 +0000 UTC m=+0.050196688 container create e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:43 compute-0 systemd[1]: Started libpod-conmon-e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d.scope.
Jan 21 13:44:43 compute-0 systemd[1]: libpod-conmon-9f3cb12c6558ab34a02eb9bc7072ea94263a3e3849bba0a0f69e7e1538dde000.scope: Deactivated successfully.
Jan 21 13:44:43 compute-0 podman[76144]: 2026-01-21 13:44:43.165735935 +0000 UTC m=+0.023906972 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:43 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/507ce1ae8cc3495db02181b62d3254f4a324a22b7e3ad4cd1e9011769b9c6d27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/507ce1ae8cc3495db02181b62d3254f4a324a22b7e3ad4cd1e9011769b9c6d27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/507ce1ae8cc3495db02181b62d3254f4a324a22b7e3ad4cd1e9011769b9c6d27/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:43 compute-0 podman[76144]: 2026-01-21 13:44:43.288175363 +0000 UTC m=+0.146346400 container init e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:43 compute-0 podman[76144]: 2026-01-21 13:44:43.292721358 +0000 UTC m=+0.150892355 container start e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:44:43 compute-0 podman[76144]: 2026-01-21 13:44:43.297144782 +0000 UTC m=+0.155315859 container attach e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:43 compute-0 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 13:44:43 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 21 13:44:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:43 compute-0 ceph-mgr[75322]: [cephadm INFO root] Set ssh ssh_user
Jan 21 13:44:43 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 21 13:44:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 21 13:44:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:43 compute-0 ceph-mgr[75322]: [cephadm INFO root] Set ssh ssh_config
Jan 21 13:44:43 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 21 13:44:43 compute-0 ceph-mgr[75322]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 21 13:44:43 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 21 13:44:43 compute-0 eager_hermann[76161]: ssh user set to ceph-admin. sudo will be used
Jan 21 13:44:43 compute-0 systemd[1]: libpod-e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d.scope: Deactivated successfully.
Jan 21 13:44:43 compute-0 podman[76144]: 2026-01-21 13:44:43.79918804 +0000 UTC m=+0.657359067 container died e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 21 13:44:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-507ce1ae8cc3495db02181b62d3254f4a324a22b7e3ad4cd1e9011769b9c6d27-merged.mount: Deactivated successfully.
Jan 21 13:44:43 compute-0 podman[76144]: 2026-01-21 13:44:43.85449304 +0000 UTC m=+0.712664037 container remove e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d (image=quay.io/ceph/ceph:v20, name=eager_hermann, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 13:44:43 compute-0 systemd[1]: libpod-conmon-e245edf392cd56a373c1ab15f89825325160d1a51adcb51ad71919a11a366b8d.scope: Deactivated successfully.
Jan 21 13:44:43 compute-0 podman[76200]: 2026-01-21 13:44:43.929885236 +0000 UTC m=+0.048868988 container create 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:43 compute-0 systemd[1]: Started libpod-conmon-6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c.scope.
Jan 21 13:44:43 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:43 compute-0 podman[76200]: 2026-01-21 13:44:43.989270994 +0000 UTC m=+0.108254846 container init 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 13:44:43 compute-0 podman[76200]: 2026-01-21 13:44:43.996472597 +0000 UTC m=+0.115456399 container start 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 13:44:44 compute-0 podman[76200]: 2026-01-21 13:44:44.000956741 +0000 UTC m=+0.119940533 container attach 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 13:44:44 compute-0 podman[76200]: 2026-01-21 13:44:43.909282232 +0000 UTC m=+0.028266084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:44 compute-0 ceph-mon[75031]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:44 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 21 13:44:44 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:44 compute-0 ceph-mgr[75322]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 21 13:44:44 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 21 13:44:44 compute-0 ceph-mgr[75322]: [cephadm INFO root] Set ssh private key
Jan 21 13:44:44 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 21 13:44:44 compute-0 systemd[1]: libpod-6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c.scope: Deactivated successfully.
Jan 21 13:44:44 compute-0 podman[76200]: 2026-01-21 13:44:44.422805545 +0000 UTC m=+0.541789337 container died 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a9e3ee2054a9806511c45eae173dbc06fc8f0e9c8b5b78ab5838923a5be85ae-merged.mount: Deactivated successfully.
Jan 21 13:44:44 compute-0 podman[76200]: 2026-01-21 13:44:44.456760559 +0000 UTC m=+0.575744321 container remove 6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c (image=quay.io/ceph/ceph:v20, name=jolly_galileo, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 13:44:44 compute-0 systemd[1]: libpod-conmon-6e1eeb143373480cfa7ec5da9ccf5cf88253b64c24e25dcdb0d5e792d132969c.scope: Deactivated successfully.
Jan 21 13:44:44 compute-0 podman[76254]: 2026-01-21 13:44:44.527338596 +0000 UTC m=+0.044595987 container create 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 13:44:44 compute-0 systemd[1]: Started libpod-conmon-98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856.scope.
Jan 21 13:44:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:44 compute-0 podman[76254]: 2026-01-21 13:44:44.508066071 +0000 UTC m=+0.025323482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:44 compute-0 podman[76254]: 2026-01-21 13:44:44.613524117 +0000 UTC m=+0.130781518 container init 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 13:44:44 compute-0 podman[76254]: 2026-01-21 13:44:44.618601089 +0000 UTC m=+0.135858520 container start 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 13:44:44 compute-0 podman[76254]: 2026-01-21 13:44:44.623204146 +0000 UTC m=+0.140461547 container attach 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 13:44:44 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:44:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 21 13:44:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:45 compute-0 ceph-mgr[75322]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 21 13:44:45 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 21 13:44:45 compute-0 systemd[1]: libpod-98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856.scope: Deactivated successfully.
Jan 21 13:44:45 compute-0 podman[76296]: 2026-01-21 13:44:45.076384046 +0000 UTC m=+0.021541498 container died 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:45 compute-0 ceph-mon[75031]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:45 compute-0 ceph-mon[75031]: Set ssh ssh_user
Jan 21 13:44:45 compute-0 ceph-mon[75031]: Set ssh ssh_config
Jan 21 13:44:45 compute-0 ceph-mon[75031]: ssh user set to ceph-admin. sudo will be used
Jan 21 13:44:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7db99edd3544c53a233627785a890f0c86d2d5eb7c275c1980d86fa19704cc6b-merged.mount: Deactivated successfully.
Jan 21 13:44:45 compute-0 podman[76296]: 2026-01-21 13:44:45.118196873 +0000 UTC m=+0.063354355 container remove 98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856 (image=quay.io/ceph/ceph:v20, name=condescending_turing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:45 compute-0 systemd[1]: libpod-conmon-98c51a6966e7b1c2075bea61a80c6d5eb400826c1cce6e0794728ae1da82b856.scope: Deactivated successfully.
Jan 21 13:44:45 compute-0 podman[76311]: 2026-01-21 13:44:45.192731177 +0000 UTC m=+0.047426487 container create bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:45 compute-0 systemd[1]: Started libpod-conmon-bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5.scope.
Jan 21 13:44:45 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:45 compute-0 podman[76311]: 2026-01-21 13:44:45.170635582 +0000 UTC m=+0.025330922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1922efcd4f4991e0b6fa0288e3d7ef7a7b55b2c8fc0aede75fb03a2d1646a54c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1922efcd4f4991e0b6fa0288e3d7ef7a7b55b2c8fc0aede75fb03a2d1646a54c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1922efcd4f4991e0b6fa0288e3d7ef7a7b55b2c8fc0aede75fb03a2d1646a54c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:45 compute-0 podman[76311]: 2026-01-21 13:44:45.299400271 +0000 UTC m=+0.154095561 container init bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 21 13:44:45 compute-0 podman[76311]: 2026-01-21 13:44:45.306639474 +0000 UTC m=+0.161334764 container start bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:45 compute-0 podman[76311]: 2026-01-21 13:44:45.316532815 +0000 UTC m=+0.171228105 container attach bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:45 compute-0 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 13:44:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:45 compute-0 keen_khorana[76327]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNBJgNZdVoji2xTK2lG5ndIv4X1xEtVZQs4dDQzvfbLwLUVgdqlevKS268jXG1mUgr1C08P9beV70uqaAgOh5fMJOEdYLN/c3D27OCYQnDCyuallCCjQuYs4OcgMQr2aW5xgo7ckrqlSxO4dE/QDi3vpX8q9rntqqDpTTf9oXuzezXWwYnuE9qPIM8yh2VLNxAACRy/jp77AuRd2OsUbQeMawhyHhZy1RBhvvuzTs2CRz1mpVUWiJ+TDrCFNv/LBFfvLhfba/YCmrJu14C/N9eMIEfsgJSJjKcQrLBJF4SrKaJhvea306fBEkZwFfRqi9CKPnptokktbQ7QxzeoYQEgrA1FaG+69xASHcb9mmslk8zmpJMCDLxjNXzXwwr6mnC1l35x8Bh6kyePvmHcXpj07zryJ/AyHwRVyjeQa0Lzz2S2G5CxUn2EQWF0LgrlVjG1BM5hdOjgdDNrDVYUOb9Hooq7BAxqqYD8gWVnsT0QPbitHxPQQglR+6C51Db3d0= zuul@controller
Jan 21 13:44:45 compute-0 systemd[1]: libpod-bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5.scope: Deactivated successfully.
Jan 21 13:44:45 compute-0 podman[76311]: 2026-01-21 13:44:45.724808234 +0000 UTC m=+0.579503554 container died bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1922efcd4f4991e0b6fa0288e3d7ef7a7b55b2c8fc0aede75fb03a2d1646a54c-merged.mount: Deactivated successfully.
Jan 21 13:44:45 compute-0 podman[76311]: 2026-01-21 13:44:45.779086449 +0000 UTC m=+0.633781739 container remove bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5 (image=quay.io/ceph/ceph:v20, name=keen_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:45 compute-0 systemd[1]: libpod-conmon-bf35575838eff2acefe1dcfa50ce5dac0361954657224aae1f4ca613064b2af5.scope: Deactivated successfully.
Jan 21 13:44:45 compute-0 podman[76367]: 2026-01-21 13:44:45.8407315 +0000 UTC m=+0.043762596 container create 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 13:44:45 compute-0 systemd[1]: Started libpod-conmon-1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d.scope.
Jan 21 13:44:45 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62445611cfc9fd346bccf6674962707c0dd7f83d2b8e0bd7b33d26a0fb5732e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62445611cfc9fd346bccf6674962707c0dd7f83d2b8e0bd7b33d26a0fb5732e9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62445611cfc9fd346bccf6674962707c0dd7f83d2b8e0bd7b33d26a0fb5732e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:45 compute-0 podman[76367]: 2026-01-21 13:44:45.910497326 +0000 UTC m=+0.113528392 container init 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 13:44:45 compute-0 podman[76367]: 2026-01-21 13:44:45.818221438 +0000 UTC m=+0.021252494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:45 compute-0 podman[76367]: 2026-01-21 13:44:45.915909223 +0000 UTC m=+0.118940289 container start 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:45 compute-0 podman[76367]: 2026-01-21 13:44:45.919407873 +0000 UTC m=+0.122438949 container attach 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:44:46 compute-0 ceph-mon[75031]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:46 compute-0 ceph-mon[75031]: Set ssh ssh_identity_key
Jan 21 13:44:46 compute-0 ceph-mon[75031]: Set ssh private key
Jan 21 13:44:46 compute-0 ceph-mon[75031]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:46 compute-0 ceph-mon[75031]: Set ssh ssh_identity_pub
Jan 21 13:44:46 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:46 compute-0 sshd-session[76409]: Accepted publickey for ceph-admin from 192.168.122.100 port 55846 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:46 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 21 13:44:46 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 21 13:44:46 compute-0 systemd-logind[780]: New session 21 of user ceph-admin.
Jan 21 13:44:46 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 21 13:44:46 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 21 13:44:46 compute-0 systemd[76413]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:46 compute-0 systemd[76413]: Queued start job for default target Main User Target.
Jan 21 13:44:46 compute-0 systemd[76413]: Created slice User Application Slice.
Jan 21 13:44:46 compute-0 systemd[76413]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 21 13:44:46 compute-0 systemd[76413]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 13:44:46 compute-0 systemd[76413]: Reached target Paths.
Jan 21 13:44:46 compute-0 systemd[76413]: Reached target Timers.
Jan 21 13:44:46 compute-0 systemd[76413]: Starting D-Bus User Message Bus Socket...
Jan 21 13:44:46 compute-0 sshd-session[76426]: Accepted publickey for ceph-admin from 192.168.122.100 port 55862 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:46 compute-0 systemd[76413]: Starting Create User's Volatile Files and Directories...
Jan 21 13:44:46 compute-0 systemd-logind[780]: New session 23 of user ceph-admin.
Jan 21 13:44:46 compute-0 systemd[76413]: Listening on D-Bus User Message Bus Socket.
Jan 21 13:44:46 compute-0 systemd[76413]: Finished Create User's Volatile Files and Directories.
Jan 21 13:44:46 compute-0 systemd[76413]: Reached target Sockets.
Jan 21 13:44:46 compute-0 systemd[76413]: Reached target Basic System.
Jan 21 13:44:46 compute-0 systemd[76413]: Reached target Main User Target.
Jan 21 13:44:46 compute-0 systemd[76413]: Startup finished in 124ms.
Jan 21 13:44:46 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 21 13:44:46 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 21 13:44:46 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 21 13:44:46 compute-0 sshd-session[76409]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:46 compute-0 sshd-session[76426]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:46 compute-0 sudo[76433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:44:46 compute-0 sudo[76433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:46 compute-0 sudo[76433]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:46 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:44:47 compute-0 sshd-session[76458]: Accepted publickey for ceph-admin from 192.168.122.100 port 55874 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:47 compute-0 systemd-logind[780]: New session 24 of user ceph-admin.
Jan 21 13:44:47 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 21 13:44:47 compute-0 sshd-session[76458]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:47 compute-0 ceph-mon[75031]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:47 compute-0 ceph-mon[75031]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:47 compute-0 sudo[76462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 21 13:44:47 compute-0 sudo[76462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:47 compute-0 sudo[76462]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:47 compute-0 sshd-session[76487]: Accepted publickey for ceph-admin from 192.168.122.100 port 55886 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:47 compute-0 systemd-logind[780]: New session 25 of user ceph-admin.
Jan 21 13:44:47 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 21 13:44:47 compute-0 sshd-session[76487]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:47 compute-0 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 13:44:47 compute-0 sudo[76491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 21 13:44:47 compute-0 sudo[76491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:47 compute-0 sudo[76491]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:47 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 21 13:44:47 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 21 13:44:47 compute-0 sshd-session[76516]: Accepted publickey for ceph-admin from 192.168.122.100 port 55892 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:47 compute-0 systemd-logind[780]: New session 26 of user ceph-admin.
Jan 21 13:44:47 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 21 13:44:47 compute-0 sshd-session[76516]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:47 compute-0 sudo[76520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:47 compute-0 sudo[76520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:47 compute-0 sudo[76520]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:48 compute-0 sshd-session[76545]: Accepted publickey for ceph-admin from 192.168.122.100 port 55898 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:48 compute-0 systemd-logind[780]: New session 27 of user ceph-admin.
Jan 21 13:44:48 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 21 13:44:48 compute-0 sshd-session[76545]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:48 compute-0 sudo[76549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:48 compute-0 sudo[76549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:48 compute-0 sudo[76549]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:48 compute-0 ceph-mon[75031]: Deploying cephadm binary to compute-0
Jan 21 13:44:48 compute-0 sshd-session[76574]: Accepted publickey for ceph-admin from 192.168.122.100 port 55904 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:48 compute-0 systemd-logind[780]: New session 28 of user ceph-admin.
Jan 21 13:44:48 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 21 13:44:48 compute-0 sshd-session[76574]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:48 compute-0 sudo[76578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 21 13:44:48 compute-0 sudo[76578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:48 compute-0 sudo[76578]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:48 compute-0 sshd-session[76603]: Accepted publickey for ceph-admin from 192.168.122.100 port 55906 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:48 compute-0 systemd-logind[780]: New session 29 of user ceph-admin.
Jan 21 13:44:48 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:44:48 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 21 13:44:48 compute-0 sshd-session[76603]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:49 compute-0 sudo[76607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:49 compute-0 sudo[76607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:49 compute-0 sudo[76607]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:49 compute-0 sshd-session[76632]: Accepted publickey for ceph-admin from 192.168.122.100 port 55918 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:49 compute-0 systemd-logind[780]: New session 30 of user ceph-admin.
Jan 21 13:44:49 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 21 13:44:49 compute-0 sshd-session[76632]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:49 compute-0 sudo[76636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 21 13:44:49 compute-0 sudo[76636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:49 compute-0 sudo[76636]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:49 compute-0 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 13:44:49 compute-0 sshd-session[76661]: Accepted publickey for ceph-admin from 192.168.122.100 port 55932 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:49 compute-0 systemd-logind[780]: New session 31 of user ceph-admin.
Jan 21 13:44:49 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 21 13:44:49 compute-0 sshd-session[76661]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:50 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:44:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054703 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:44:51 compute-0 sshd-session[76688]: Accepted publickey for ceph-admin from 192.168.122.100 port 45756 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:51 compute-0 systemd-logind[780]: New session 32 of user ceph-admin.
Jan 21 13:44:51 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 21 13:44:51 compute-0 sshd-session[76688]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:51 compute-0 sudo[76692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 21 13:44:51 compute-0 sudo[76692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:51 compute-0 sudo[76692]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:51 compute-0 sshd-session[76717]: Accepted publickey for ceph-admin from 192.168.122.100 port 45766 ssh2: RSA SHA256:ZUstxBAtBK1FxBFGrrPx/2S50oPJ0zBTneC9XzeEPlk
Jan 21 13:44:51 compute-0 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 13:44:51 compute-0 systemd-logind[780]: New session 33 of user ceph-admin.
Jan 21 13:44:51 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 21 13:44:51 compute-0 sshd-session[76717]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 21 13:44:51 compute-0 sudo[76721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 21 13:44:51 compute-0 sudo[76721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:52 compute-0 sudo[76721]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 13:44:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:52 compute-0 ceph-mgr[75322]: [cephadm INFO root] Added host compute-0
Jan 21 13:44:52 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 21 13:44:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 13:44:52 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:44:52 compute-0 recursing_mahavira[76383]: Added host 'compute-0' with addr '192.168.122.100'
Jan 21 13:44:52 compute-0 systemd[1]: libpod-1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d.scope: Deactivated successfully.
Jan 21 13:44:52 compute-0 podman[76367]: 2026-01-21 13:44:52.088410487 +0000 UTC m=+6.291441543 container died 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:44:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-62445611cfc9fd346bccf6674962707c0dd7f83d2b8e0bd7b33d26a0fb5732e9-merged.mount: Deactivated successfully.
Jan 21 13:44:52 compute-0 sudo[76765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:44:52 compute-0 sudo[76765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:52 compute-0 sudo[76765]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:52 compute-0 podman[76367]: 2026-01-21 13:44:52.140255698 +0000 UTC m=+6.343286754 container remove 1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 21 13:44:52 compute-0 systemd[1]: libpod-conmon-1dcdcc70f42a0f92145766a02a8bb82c7171a17c2bb25d4f8d11215b3302926d.scope: Deactivated successfully.
Jan 21 13:44:52 compute-0 sudo[76806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Jan 21 13:44:52 compute-0 sudo[76806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:52 compute-0 podman[76821]: 2026-01-21 13:44:52.202919122 +0000 UTC m=+0.041806339 container create 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:52 compute-0 systemd[1]: Started libpod-conmon-88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf.scope.
Jan 21 13:44:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5329de35a6ba6e53fee01dc1a77263d3a90272ff52c47c0be6f023631427f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5329de35a6ba6e53fee01dc1a77263d3a90272ff52c47c0be6f023631427f8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5329de35a6ba6e53fee01dc1a77263d3a90272ff52c47c0be6f023631427f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:52 compute-0 podman[76821]: 2026-01-21 13:44:52.275462308 +0000 UTC m=+0.114349545 container init 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 13:44:52 compute-0 podman[76821]: 2026-01-21 13:44:52.181522846 +0000 UTC m=+0.020410083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:52 compute-0 podman[76821]: 2026-01-21 13:44:52.28122079 +0000 UTC m=+0.120108007 container start 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 13:44:52 compute-0 podman[76821]: 2026-01-21 13:44:52.284761261 +0000 UTC m=+0.123648478 container attach 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:52 compute-0 ceph-mgr[75322]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 21 13:44:52 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 21 13:44:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 13:44:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:52 compute-0 unruffled_haibt[76848]: Scheduled mon update...
Jan 21 13:44:52 compute-0 systemd[1]: libpod-88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf.scope: Deactivated successfully.
Jan 21 13:44:52 compute-0 podman[76821]: 2026-01-21 13:44:52.7371517 +0000 UTC m=+0.576038927 container died 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 13:44:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb5329de35a6ba6e53fee01dc1a77263d3a90272ff52c47c0be6f023631427f8-merged.mount: Deactivated successfully.
Jan 21 13:44:52 compute-0 podman[76821]: 2026-01-21 13:44:52.787207025 +0000 UTC m=+0.626094252 container remove 88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf (image=quay.io/ceph/ceph:v20, name=unruffled_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:52 compute-0 systemd[1]: libpod-conmon-88907a2731237ffdd134a6184ca068011240282d9c0560677c442790b17382cf.scope: Deactivated successfully.
Jan 21 13:44:52 compute-0 podman[76912]: 2026-01-21 13:44:52.857391907 +0000 UTC m=+0.044536037 container create 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:52 compute-0 podman[76865]: 2026-01-21 13:44:52.879817027 +0000 UTC m=+0.497502895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:52 compute-0 systemd[1]: Started libpod-conmon-2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948.scope.
Jan 21 13:44:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:52 compute-0 podman[76912]: 2026-01-21 13:44:52.837147387 +0000 UTC m=+0.024291607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6acb2758ad73b69991dcfc9dea647f20eeffd71425c7ab8036fc70a28a2008a9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6acb2758ad73b69991dcfc9dea647f20eeffd71425c7ab8036fc70a28a2008a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6acb2758ad73b69991dcfc9dea647f20eeffd71425c7ab8036fc70a28a2008a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:52 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:44:52 compute-0 podman[76912]: 2026-01-21 13:44:52.949975429 +0000 UTC m=+0.137119649 container init 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:52 compute-0 podman[76912]: 2026-01-21 13:44:52.956974089 +0000 UTC m=+0.144118239 container start 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 13:44:52 compute-0 podman[76912]: 2026-01-21 13:44:52.961896339 +0000 UTC m=+0.149040499 container attach 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:52 compute-0 podman[76946]: 2026-01-21 13:44:52.986801404 +0000 UTC m=+0.035742530 container create 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:44:53 compute-0 systemd[1]: Started libpod-conmon-2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462.scope.
Jan 21 13:44:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:53 compute-0 podman[76946]: 2026-01-21 13:44:53.068289528 +0000 UTC m=+0.117230654 container init 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:53 compute-0 podman[76946]: 2026-01-21 13:44:52.970902348 +0000 UTC m=+0.019843494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:53 compute-0 podman[76946]: 2026-01-21 13:44:53.073100497 +0000 UTC m=+0.122041663 container start 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 13:44:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:53 compute-0 ceph-mon[75031]: Added host compute-0
Jan 21 13:44:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:44:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:53 compute-0 podman[76946]: 2026-01-21 13:44:53.076658427 +0000 UTC m=+0.125599623 container attach 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:53 compute-0 wonderful_poitras[76964]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 21 13:44:53 compute-0 systemd[1]: libpod-2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462.scope: Deactivated successfully.
Jan 21 13:44:53 compute-0 podman[76946]: 2026-01-21 13:44:53.185236188 +0000 UTC m=+0.234177304 container died 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ef147c3a38316ba7ab25ac4f80e18fd308bea7e1e1bcf858d38481b22643b1f-merged.mount: Deactivated successfully.
Jan 21 13:44:53 compute-0 podman[76946]: 2026-01-21 13:44:53.223882659 +0000 UTC m=+0.272823785 container remove 2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462 (image=quay.io/ceph/ceph:v20, name=wonderful_poitras, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:53 compute-0 sudo[76806]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:53 compute-0 systemd[1]: libpod-conmon-2d970b6804b83d38c7d5490daed2869dda7c487dd08f64426e0d37a03aae3462.scope: Deactivated successfully.
Jan 21 13:44:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 21 13:44:53 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:53 compute-0 sudo[77000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:44:53 compute-0 sudo[77000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:53 compute-0 sudo[77000]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:53 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:53 compute-0 ceph-mgr[75322]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 21 13:44:53 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 21 13:44:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 13:44:53 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:53 compute-0 nervous_franklin[76931]: Scheduled mgr update...
Jan 21 13:44:53 compute-0 sudo[77025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 21 13:44:53 compute-0 sudo[77025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:53 compute-0 systemd[1]: libpod-2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948.scope: Deactivated successfully.
Jan 21 13:44:53 compute-0 podman[76912]: 2026-01-21 13:44:53.399540568 +0000 UTC m=+0.586684688 container died 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 13:44:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6acb2758ad73b69991dcfc9dea647f20eeffd71425c7ab8036fc70a28a2008a9-merged.mount: Deactivated successfully.
Jan 21 13:44:53 compute-0 podman[76912]: 2026-01-21 13:44:53.496641094 +0000 UTC m=+0.683785224 container remove 2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948 (image=quay.io/ceph/ceph:v20, name=nervous_franklin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:53 compute-0 systemd[1]: libpod-conmon-2295cb6b162601bfa6401c8eb1d5297148b0bc0f80af1847f26493169a7b1948.scope: Deactivated successfully.
Jan 21 13:44:53 compute-0 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 13:44:53 compute-0 podman[77066]: 2026-01-21 13:44:53.559106016 +0000 UTC m=+0.041916229 container create 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 13:44:53 compute-0 systemd[1]: Started libpod-conmon-14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e.scope.
Jan 21 13:44:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865fb27c6dc4463e0db2c466d400c9a55cfea9379c78615f38e27f9e2617ed55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865fb27c6dc4463e0db2c466d400c9a55cfea9379c78615f38e27f9e2617ed55/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865fb27c6dc4463e0db2c466d400c9a55cfea9379c78615f38e27f9e2617ed55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:53 compute-0 podman[77066]: 2026-01-21 13:44:53.634044616 +0000 UTC m=+0.116854899 container init 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 13:44:53 compute-0 podman[77066]: 2026-01-21 13:44:53.541597656 +0000 UTC m=+0.024407879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:53 compute-0 podman[77066]: 2026-01-21 13:44:53.645229945 +0000 UTC m=+0.128040188 container start 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 13:44:53 compute-0 podman[77066]: 2026-01-21 13:44:53.649501876 +0000 UTC m=+0.132312089 container attach 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:44:53 compute-0 sudo[77025]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:44:53 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:53 compute-0 sudo[77107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:44:53 compute-0 sudo[77107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:53 compute-0 sudo[77107]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:53 compute-0 sudo[77142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 13:44:53 compute-0 sudo[77142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:54 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:54 compute-0 ceph-mgr[75322]: [cephadm INFO root] Saving service crash spec with placement *
Jan 21 13:44:54 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 21 13:44:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 21 13:44:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:54 compute-0 condescending_sammet[77084]: Scheduled crash update...
Jan 21 13:44:54 compute-0 ceph-mon[75031]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:54 compute-0 ceph-mon[75031]: Saving service mon spec with placement count:5
Jan 21 13:44:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:54 compute-0 systemd[1]: libpod-14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e.scope: Deactivated successfully.
Jan 21 13:44:54 compute-0 podman[77066]: 2026-01-21 13:44:54.396891868 +0000 UTC m=+0.879702101 container died 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:54 compute-0 podman[77221]: 2026-01-21 13:44:54.424441612 +0000 UTC m=+0.087205386 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-865fb27c6dc4463e0db2c466d400c9a55cfea9379c78615f38e27f9e2617ed55-merged.mount: Deactivated successfully.
Jan 21 13:44:54 compute-0 podman[77066]: 2026-01-21 13:44:54.449315537 +0000 UTC m=+0.932125740 container remove 14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e (image=quay.io/ceph/ceph:v20, name=condescending_sammet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:54 compute-0 systemd[1]: libpod-conmon-14f6547a79d75b0d0c05e309916a85d8897908d7e51af96fb90a4fe78e32d66e.scope: Deactivated successfully.
Jan 21 13:44:54 compute-0 podman[77221]: 2026-01-21 13:44:54.515299159 +0000 UTC m=+0.178062933 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:54 compute-0 podman[77252]: 2026-01-21 13:44:54.537514906 +0000 UTC m=+0.055746598 container create 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 13:44:54 compute-0 systemd[1]: Started libpod-conmon-1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b.scope.
Jan 21 13:44:54 compute-0 podman[77252]: 2026-01-21 13:44:54.508508262 +0000 UTC m=+0.026739953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d8bd823c8b0ffff623068d01fcc3c4c405e76674813b460d3ad954b44cb04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d8bd823c8b0ffff623068d01fcc3c4c405e76674813b460d3ad954b44cb04/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d8bd823c8b0ffff623068d01fcc3c4c405e76674813b460d3ad954b44cb04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:54 compute-0 podman[77252]: 2026-01-21 13:44:54.618850048 +0000 UTC m=+0.137081719 container init 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 13:44:54 compute-0 podman[77252]: 2026-01-21 13:44:54.623908119 +0000 UTC m=+0.142139770 container start 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:54 compute-0 podman[77252]: 2026-01-21 13:44:54.629437118 +0000 UTC m=+0.147668839 container attach 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 13:44:54 compute-0 sudo[77142]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:44:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:54 compute-0 sudo[77345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:44:54 compute-0 sudo[77345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:54 compute-0 sudo[77345]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:54 compute-0 sudo[77370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:44:54 compute-0 sudo[77370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:54 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:44:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 21 13:44:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3817281900' entity='client.admin' 
Jan 21 13:44:55 compute-0 systemd[1]: libpod-1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b.scope: Deactivated successfully.
Jan 21 13:44:55 compute-0 podman[77252]: 2026-01-21 13:44:55.103183113 +0000 UTC m=+0.621414764 container died 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 13:44:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-248d8bd823c8b0ffff623068d01fcc3c4c405e76674813b460d3ad954b44cb04-merged.mount: Deactivated successfully.
Jan 21 13:44:55 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77419 (sysctl)
Jan 21 13:44:55 compute-0 podman[77252]: 2026-01-21 13:44:55.144168938 +0000 UTC m=+0.662400589 container remove 1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b (image=quay.io/ceph/ceph:v20, name=charming_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:55 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 21 13:44:55 compute-0 systemd[1]: libpod-conmon-1903cdd94c7f1c2fd79a33b536f77b474da47331d3e9528a78c1018e80f6391b.scope: Deactivated successfully.
Jan 21 13:44:55 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 21 13:44:55 compute-0 podman[77422]: 2026-01-21 13:44:55.223834326 +0000 UTC m=+0.057426951 container create b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 13:44:55 compute-0 systemd[1]: Started libpod-conmon-b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd.scope.
Jan 21 13:44:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03748b1f16ab7d7dbdd282e49648aabda90e655558713999a028316c589814b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03748b1f16ab7d7dbdd282e49648aabda90e655558713999a028316c589814b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03748b1f16ab7d7dbdd282e49648aabda90e655558713999a028316c589814b2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:55 compute-0 podman[77422]: 2026-01-21 13:44:55.201123492 +0000 UTC m=+0.034716207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:55 compute-0 podman[77422]: 2026-01-21 13:44:55.304991715 +0000 UTC m=+0.138584360 container init b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 13:44:55 compute-0 podman[77422]: 2026-01-21 13:44:55.31020874 +0000 UTC m=+0.143801365 container start b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:55 compute-0 podman[77422]: 2026-01-21 13:44:55.314000993 +0000 UTC m=+0.147593628 container attach b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 13:44:55 compute-0 ceph-mon[75031]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:55 compute-0 ceph-mon[75031]: Saving service mgr spec with placement count:2
Jan 21 13:44:55 compute-0 ceph-mon[75031]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:55 compute-0 ceph-mon[75031]: Saving service crash spec with placement *
Jan 21 13:44:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:55 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3817281900' entity='client.admin' 
Jan 21 13:44:55 compute-0 sudo[77370]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:55 compute-0 sudo[77481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:44:55 compute-0 sudo[77481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:55 compute-0 sudo[77481]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:55 compute-0 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 13:44:55 compute-0 sudo[77506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 21 13:44:55 compute-0 sudo[77506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 21 13:44:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:55 compute-0 systemd[1]: libpod-b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd.scope: Deactivated successfully.
Jan 21 13:44:55 compute-0 podman[77422]: 2026-01-21 13:44:55.765027033 +0000 UTC m=+0.598619658 container died b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 13:44:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-03748b1f16ab7d7dbdd282e49648aabda90e655558713999a028316c589814b2-merged.mount: Deactivated successfully.
Jan 21 13:44:55 compute-0 podman[77422]: 2026-01-21 13:44:55.814097893 +0000 UTC m=+0.647690518 container remove b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd (image=quay.io/ceph/ceph:v20, name=inspiring_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:55 compute-0 systemd[1]: libpod-conmon-b85281a6258e777b7c0c9c93ade97d8d0bcd5496fe8b72abe1a8e9be6bc229dd.scope: Deactivated successfully.
Jan 21 13:44:55 compute-0 sudo[77506]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:55 compute-0 podman[77557]: 2026-01-21 13:44:55.884793283 +0000 UTC m=+0.045242506 container create 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 13:44:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:44:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:55 compute-0 systemd[1]: Started libpod-conmon-29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2.scope.
Jan 21 13:44:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:55 compute-0 sudo[77578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:44:55 compute-0 podman[77557]: 2026-01-21 13:44:55.867334354 +0000 UTC m=+0.027783597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:55 compute-0 sudo[77578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ea683f36b1b44efca8350a05d70858a3b9c19a93dd85360f3132181991aa52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ea683f36b1b44efca8350a05d70858a3b9c19a93dd85360f3132181991aa52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ea683f36b1b44efca8350a05d70858a3b9c19a93dd85360f3132181991aa52/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:55 compute-0 sudo[77578]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:55 compute-0 podman[77557]: 2026-01-21 13:44:55.976689566 +0000 UTC m=+0.137138839 container init 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:55 compute-0 podman[77557]: 2026-01-21 13:44:55.982824893 +0000 UTC m=+0.143274116 container start 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:55 compute-0 podman[77557]: 2026-01-21 13:44:55.986498046 +0000 UTC m=+0.146947319 container attach 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:56 compute-0 sudo[77609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- inventory --format=json-pretty --filter-for-batch
Jan 21 13:44:56 compute-0 sudo[77609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:44:56 compute-0 podman[77666]: 2026-01-21 13:44:56.304396515 +0000 UTC m=+0.055547855 container create 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:56 compute-0 systemd[1]: Started libpod-conmon-15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a.scope.
Jan 21 13:44:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 13:44:56 compute-0 podman[77666]: 2026-01-21 13:44:56.270918647 +0000 UTC m=+0.022070027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:44:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:56 compute-0 ceph-mgr[75322]: [cephadm INFO root] Added label _admin to host compute-0
Jan 21 13:44:56 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 21 13:44:56 compute-0 romantic_swanson[77601]: Added label _admin to host compute-0
Jan 21 13:44:56 compute-0 podman[77666]: 2026-01-21 13:44:56.383305661 +0000 UTC m=+0.134457021 container init 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:56 compute-0 systemd[1]: libpod-29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2.scope: Deactivated successfully.
Jan 21 13:44:56 compute-0 podman[77557]: 2026-01-21 13:44:56.392600784 +0000 UTC m=+0.553050017 container died 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 13:44:56 compute-0 podman[77666]: 2026-01-21 13:44:56.394276308 +0000 UTC m=+0.145427638 container start 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:44:56 compute-0 distracted_banach[77683]: 167 167
Jan 21 13:44:56 compute-0 systemd[1]: libpod-15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a.scope: Deactivated successfully.
Jan 21 13:44:56 compute-0 podman[77666]: 2026-01-21 13:44:56.407165842 +0000 UTC m=+0.158317202 container attach 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 13:44:56 compute-0 podman[77666]: 2026-01-21 13:44:56.407620278 +0000 UTC m=+0.158771608 container died 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1446e02596ef256c491643f9f5d073a0ae2e4cab120adc6855b36262759246f3-merged.mount: Deactivated successfully.
Jan 21 13:44:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-19ea683f36b1b44efca8350a05d70858a3b9c19a93dd85360f3132181991aa52-merged.mount: Deactivated successfully.
Jan 21 13:44:56 compute-0 podman[77557]: 2026-01-21 13:44:56.473382827 +0000 UTC m=+0.633832060 container remove 29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2 (image=quay.io/ceph/ceph:v20, name=romantic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:56 compute-0 podman[77666]: 2026-01-21 13:44:56.480672082 +0000 UTC m=+0.231823412 container remove 15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:56 compute-0 systemd[1]: libpod-conmon-15e303364e8ccfd194fa9d372981cae2fed48bf5532b22ba50247817d4c8749a.scope: Deactivated successfully.
Jan 21 13:44:56 compute-0 systemd[1]: libpod-conmon-29d711089f804348090c1fbb7ae16b7ce2b54a3355ada4aa6e033179d43093b2.scope: Deactivated successfully.
Jan 21 13:44:56 compute-0 podman[77713]: 2026-01-21 13:44:56.543283615 +0000 UTC m=+0.046413933 container create a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 13:44:56 compute-0 systemd[1]: Started libpod-conmon-a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343.scope.
Jan 21 13:44:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6685dce33b48c779b26398c0c6a17bca15f484a549f38128527a40a467a70df7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6685dce33b48c779b26398c0c6a17bca15f484a549f38128527a40a467a70df7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6685dce33b48c779b26398c0c6a17bca15f484a549f38128527a40a467a70df7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:56 compute-0 podman[77713]: 2026-01-21 13:44:56.521413254 +0000 UTC m=+0.024543672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:56 compute-0 podman[77713]: 2026-01-21 13:44:56.616179786 +0000 UTC m=+0.119310134 container init a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:56 compute-0 podman[77713]: 2026-01-21 13:44:56.621524633 +0000 UTC m=+0.124654971 container start a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:44:56 compute-0 podman[77713]: 2026-01-21 13:44:56.627210254 +0000 UTC m=+0.130340672 container attach a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 13:44:56 compute-0 ceph-mon[75031]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:56 compute-0 ceph-mon[75031]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:44:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:56 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:44:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 21 13:44:57 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1529798143' entity='client.admin' 
Jan 21 13:44:57 compute-0 magical_snyder[77729]: set mgr/dashboard/cluster/status
Jan 21 13:44:57 compute-0 systemd[1]: libpod-a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343.scope: Deactivated successfully.
Jan 21 13:44:57 compute-0 podman[77713]: 2026-01-21 13:44:57.189347441 +0000 UTC m=+0.692477839 container died a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 13:44:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6685dce33b48c779b26398c0c6a17bca15f484a549f38128527a40a467a70df7-merged.mount: Deactivated successfully.
Jan 21 13:44:57 compute-0 podman[77713]: 2026-01-21 13:44:57.232626589 +0000 UTC m=+0.735756937 container remove a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343 (image=quay.io/ceph/ceph:v20, name=magical_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 13:44:57 compute-0 systemd[1]: libpod-conmon-a52d7c0646368a0f38c57d41cc41e0b3afe8a0fb6c8746cde197332617155343.scope: Deactivated successfully.
Jan 21 13:44:57 compute-0 systemd[1]: Reloading.
Jan 21 13:44:57 compute-0 systemd-sysv-generator[77800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:44:57 compute-0 systemd-rc-local-generator[77795]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:44:57 compute-0 ceph-mgr[75322]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 21 13:44:57 compute-0 sudo[73985]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:57 compute-0 podman[77814]: 2026-01-21 13:44:57.6915212 +0000 UTC m=+0.040222444 container create 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:44:57 compute-0 systemd[1]: Started libpod-conmon-19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b.scope.
Jan 21 13:44:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf78ca496cc18a9bddb2f2db6711b9c8e3551657aee3c1decfc5d560553eafe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf78ca496cc18a9bddb2f2db6711b9c8e3551657aee3c1decfc5d560553eafe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf78ca496cc18a9bddb2f2db6711b9c8e3551657aee3c1decfc5d560553eafe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf78ca496cc18a9bddb2f2db6711b9c8e3551657aee3c1decfc5d560553eafe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:57 compute-0 podman[77814]: 2026-01-21 13:44:57.674291225 +0000 UTC m=+0.022992469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:44:57 compute-0 podman[77814]: 2026-01-21 13:44:57.777067162 +0000 UTC m=+0.125768426 container init 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:57 compute-0 podman[77814]: 2026-01-21 13:44:57.78672427 +0000 UTC m=+0.135425524 container start 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:57 compute-0 podman[77814]: 2026-01-21 13:44:57.790773988 +0000 UTC m=+0.139475292 container attach 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 13:44:57 compute-0 sudo[77858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrjxaroqwafpfqzrjwdyxxogjmonnlre ; /usr/bin/python3'
Jan 21 13:44:57 compute-0 sudo[77858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:44:58 compute-0 python3[77860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
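The Ansible task recorded above runs a one-shot ceph admin container to set mgr/cephadm/use_repo_digest to false, which controls whether cephadm resolves image tags like v20 to repo digests. A minimal Python sketch of the same invocation; the image, fsid, and keyring paths are copied verbatim from the log line, and run_ceph_admin is a hypothetical helper name, not the Ansible module's code:

    import subprocess

    # Hypothetical helper mirroring the podman one-shot admin container above.
    def run_ceph_admin(*ceph_args: str) -> str:
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v20",
            "--fsid", "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *ceph_args,
        ]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    run_ceph_admin("config", "set", "mgr", "mgr/cephadm/use_repo_digest", "false")

The transient container names that follow (eager_kilby, agitated_jemison, ...) are podman's auto-generated names for exactly these --rm one-shot runs.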
Jan 21 13:44:58 compute-0 podman[77866]: 2026-01-21 13:44:58.120373354 +0000 UTC m=+0.040331537 container create 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 13:44:58 compute-0 systemd[1]: Started libpod-conmon-54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6.scope.
Jan 21 13:44:58 compute-0 ceph-mon[75031]: Added label _admin to host compute-0
Jan 21 13:44:58 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1529798143' entity='client.admin' 
Jan 21 13:44:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598713f7f5c33ec19ecbaaa0e7067df43ed44c758bfebf4dc9a4d427a489af9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598713f7f5c33ec19ecbaaa0e7067df43ed44c758bfebf4dc9a4d427a489af9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:44:58 compute-0 podman[77866]: 2026-01-21 13:44:58.194163407 +0000 UTC m=+0.114121610 container init 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:58 compute-0 podman[77866]: 2026-01-21 13:44:58.102015442 +0000 UTC m=+0.021973655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:44:58 compute-0 podman[77866]: 2026-01-21 13:44:58.200319256 +0000 UTC m=+0.120277439 container start 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:44:58 compute-0 podman[77866]: 2026-01-21 13:44:58.204051338 +0000 UTC m=+0.124009541 container attach 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:44:58 compute-0 strange_bhabha[77830]: [
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:     {
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:         "available": false,
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:         "being_replaced": false,
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:         "ceph_device_lvm": false,
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:         "lsm_data": {},
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:         "lvs": [],
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:         "path": "/dev/sr0",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:         "rejected_reasons": [
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "Insufficient space (<5GB)",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "Has a FileSystem"
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:         ],
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:         "sys_api": {
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "actuators": null,
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "device_nodes": [
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:                 "sr0"
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             ],
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "devname": "sr0",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "human_readable_size": "482.00 KB",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "id_bus": "ata",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "model": "QEMU DVD-ROM",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "nr_requests": "2",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "parent": "/dev/sr0",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "partitions": {},
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "path": "/dev/sr0",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "removable": "1",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "rev": "2.5+",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "ro": "0",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "rotational": "1",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "sas_address": "",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "sas_device_handle": "",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "scheduler_mode": "mq-deadline",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "sectors": 0,
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "sectorsize": "2048",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "size": 493568.0,
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "support_discard": "2048",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "type": "disk",
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:             "vendor": "QEMU"
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:         }
Jan 21 13:44:58 compute-0 strange_bhabha[77830]:     }
Jan 21 13:44:58 compute-0 strange_bhabha[77830]: ]
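The JSON array printed by strange_bhabha above is the host device inventory cephadm gathered: the only device found is the QEMU DVD-ROM /dev/sr0, reported with "available": false and two rejection reasons, so no OSD can be placed on this host yet. A short sketch of reading such output; the field names are as printed above, everything else (function name, how the text is captured) is assumed:

    import json

    # inventory_text: the JSON array emitted by the inventory container above.
    def report_devices(inventory_text: str) -> list[str]:
        usable = []
        for dev in json.loads(inventory_text):
            if dev["available"]:
                usable.append(dev["path"])
            else:
                # e.g. /dev/sr0 rejected: Insufficient space (<5GB), Has a FileSystem
                print(f"{dev['path']} rejected: {', '.join(dev['rejected_reasons'])}")
        return usable

This explains the TOO_FEW_OSDS health warning that appears shortly afterwards: with every device rejected, the OSD count stays at 0.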
Jan 21 13:44:58 compute-0 systemd[1]: libpod-19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b.scope: Deactivated successfully.
Jan 21 13:44:58 compute-0 podman[77814]: 2026-01-21 13:44:58.291547488 +0000 UTC m=+0.640248762 container died 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 13:44:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-baf78ca496cc18a9bddb2f2db6711b9c8e3551657aee3c1decfc5d560553eafe-merged.mount: Deactivated successfully.
Jan 21 13:44:58 compute-0 podman[77814]: 2026-01-21 13:44:58.334356629 +0000 UTC m=+0.683057873 container remove 19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 13:44:58 compute-0 systemd[1]: libpod-conmon-19c1e81d51ed397f3a082876fdb127aadf6a48f9937d2814c81d8e190fa39f9b.scope: Deactivated successfully.
Jan 21 13:44:58 compute-0 sudo[77609]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:44:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:44:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:44:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:44:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 13:44:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 13:44:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:44:58 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:44:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:44:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:44:58 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 21 13:44:58 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 21 13:44:58 compute-0 sudo[78527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 21 13:44:58 compute-0 sudo[78527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:58 compute-0 sudo[78527]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:58 compute-0 sudo[78552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph
Jan 21 13:44:58 compute-0 sudo[78552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:58 compute-0 sudo[78552]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 21 13:44:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3310542890' entity='client.admin' 
Jan 21 13:44:58 compute-0 systemd[1]: libpod-54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6.scope: Deactivated successfully.
Jan 21 13:44:58 compute-0 sudo[78577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph/ceph.conf.new
Jan 21 13:44:58 compute-0 sudo[78577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:58 compute-0 sudo[78577]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:58 compute-0 podman[78603]: 2026-01-21 13:44:58.666834847 +0000 UTC m=+0.037290574 container died 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:44:58 compute-0 sudo[78605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:58 compute-0 sudo[78605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:58 compute-0 sudo[78605]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:58 compute-0 sudo[78642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph/ceph.conf.new
Jan 21 13:44:58 compute-0 sudo[78642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:58 compute-0 sudo[78642]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:58 compute-0 sudo[78690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph/ceph.conf.new
Jan 21 13:44:58 compute-0 sudo[78690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:58 compute-0 sudo[78690]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:58 compute-0 sudo[78715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph/ceph.conf.new
Jan 21 13:44:58 compute-0 sudo[78715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:58 compute-0 sudo[78715]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:58 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:44:58 compute-0 sudo[78740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 21 13:44:58 compute-0 sudo[78740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:58 compute-0 sudo[78740]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:58 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf
Jan 21 13:44:58 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf
Jan 21 13:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-598713f7f5c33ec19ecbaaa0e7067df43ed44c758bfebf4dc9a4d427a489af9e-merged.mount: Deactivated successfully.
Jan 21 13:44:59 compute-0 sudo[78765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config
Jan 21 13:44:59 compute-0 sudo[78765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[78765]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 podman[78603]: 2026-01-21 13:44:59.043751028 +0000 UTC m=+0.414206715 container remove 54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6 (image=quay.io/ceph/ceph:v20, name=eager_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 13:44:59 compute-0 systemd[1]: libpod-conmon-54b100496c862917e6e6f6d62b85164fa37a47018026644c5b3a1408af317ac6.scope: Deactivated successfully.
Jan 21 13:44:59 compute-0 sudo[77858]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 sudo[78792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config
Jan 21 13:44:59 compute-0 sudo[78792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[78792]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 sudo[78817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf.new
Jan 21 13:44:59 compute-0 sudo[78817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[78817]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 sudo[78842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:59 compute-0 sudo[78842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[78842]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 sudo[78867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf.new
Jan 21 13:44:59 compute-0 sudo[78867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[78867]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:44:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 13:44:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:44:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:44:59 compute-0 ceph-mon[75031]: Updating compute-0:/etc/ceph/ceph.conf
Jan 21 13:44:59 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3310542890' entity='client.admin' 
Jan 21 13:44:59 compute-0 ceph-mon[75031]: Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf
Jan 21 13:44:59 compute-0 sudo[78920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf.new
Jan 21 13:44:59 compute-0 sudo[78920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[78920]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 sudo[78985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf.new
Jan 21 13:44:59 compute-0 sudo[78985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[78985]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 ceph-mgr[75322]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 21 13:44:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:44:59 compute-0 ceph-mon[75031]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 21 13:44:59 compute-0 sudo[79029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf.new /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.conf
Jan 21 13:44:59 compute-0 sudo[79029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[79029]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 13:44:59 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 13:44:59 compute-0 sudo[79065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 21 13:44:59 compute-0 sudo[79065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[79065]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 sudo[79090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph
Jan 21 13:44:59 compute-0 sudo[79090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[79090]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 sudo[79128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph/ceph.client.admin.keyring.new
Jan 21 13:44:59 compute-0 sudo[79128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[79128]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 sudo[79186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:44:59 compute-0 sudo[79186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[79186]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 sudo[79242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzuoiqtrwyzzbyvkbjwsoaxrtunnjxbl ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769003099.4142342-36484-9282059414558/async_wrapper.py j808190305125 30 /home/zuul/.ansible/tmp/ansible-tmp-1769003099.4142342-36484-9282059414558/AnsiballZ_command.py _'
Jan 21 13:44:59 compute-0 sudo[79242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:44:59 compute-0 sudo[79235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph/ceph.client.admin.keyring.new
Jan 21 13:44:59 compute-0 sudo[79235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[79235]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 sudo[79288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph/ceph.client.admin.keyring.new
Jan 21 13:44:59 compute-0 sudo[79288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:44:59 compute-0 sudo[79288]: pam_unix(sudo:session): session closed for user root
Jan 21 13:44:59 compute-0 ansible-async_wrapper.py[79262]: Invoked with j808190305125 30 /home/zuul/.ansible/tmp/ansible-tmp-1769003099.4142342-36484-9282059414558/AnsiballZ_command.py _
Jan 21 13:45:00 compute-0 ansible-async_wrapper.py[79322]: Starting module and watcher
Jan 21 13:45:00 compute-0 ansible-async_wrapper.py[79322]: Start watching 79325 (30)
Jan 21 13:45:00 compute-0 ansible-async_wrapper.py[79325]: Start module (79325)
Jan 21 13:45:00 compute-0 ansible-async_wrapper.py[79262]: Return async_wrapper task started.
Jan 21 13:45:00 compute-0 sudo[79242]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:00 compute-0 sudo[79313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph/ceph.client.admin.keyring.new
Jan 21 13:45:00 compute-0 sudo[79313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 sudo[79313]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:00 compute-0 sudo[79343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 21 13:45:00 compute-0 sudo[79343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 sudo[79343]: pam_unix(sudo:session): session closed for user root
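The run of sudo commands above, repeated for ceph.conf and both keyring copies, is a staged-write pattern: create the file under /tmp/cephadm-<fsid>, hand the tree to ceph-admin long enough to write the payload, restore root ownership and final permissions, then mv -Z the file into place so the rename also applies the destination's SELinux label. A hedged sketch of one round of that sequence, with paths taken from the log; this is an illustration of the pattern, not cephadm's actual code:

    import subprocess

    STAGE = "/tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/etc/ceph"
    TMP = STAGE + "/ceph.client.admin.keyring.new"
    FINAL = "/etc/ceph/ceph.client.admin.keyring"

    def install_keyring(content: bytes) -> None:
        subprocess.run(["sudo", "mkdir", "-p", STAGE], check=True)
        subprocess.run(["sudo", "touch", TMP], check=True)
        # Let the unprivileged ceph-admin user write the payload...
        subprocess.run(["sudo", "chown", "-R", "ceph-admin", STAGE], check=True)
        with open(TMP, "wb") as f:
            f.write(content)
        # ...then restore root ownership and tighten permissions (600 for keyrings).
        subprocess.run(["sudo", "chown", "0:0", TMP], check=True)
        subprocess.run(["sudo", "chmod", "600", TMP], check=True)
        # mv -Z relabels the file for its destination SELinux context.
        subprocess.run(["sudo", "mv", "-Z", TMP, FINAL], check=True)

The mv at the end is what produces the "Updating compute-0:/etc/ceph/ceph.client.admin.keyring" messages from the mgr's cephadm module.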
Jan 21 13:45:00 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring
Jan 21 13:45:00 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring
Jan 21 13:45:00 compute-0 sudo[79368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config
Jan 21 13:45:00 compute-0 python3[79333]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:00 compute-0 sudo[79368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 sudo[79368]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:00 compute-0 sudo[79394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config
Jan 21 13:45:00 compute-0 podman[79392]: 2026-01-21 13:45:00.224005401 +0000 UTC m=+0.051225672 container create fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:00 compute-0 sudo[79394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 sudo[79394]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:00 compute-0 systemd[1]: Started libpod-conmon-fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10.scope.
Jan 21 13:45:00 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e99922a2c4187a40db912742cc875f27ce87443a7df6171de89a2e09562d212/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e99922a2c4187a40db912742cc875f27ce87443a7df6171de89a2e09562d212/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:00 compute-0 sudo[79431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring.new
Jan 21 13:45:00 compute-0 podman[79392]: 2026-01-21 13:45:00.202899929 +0000 UTC m=+0.030120240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:00 compute-0 sudo[79431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 sudo[79431]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:00 compute-0 podman[79392]: 2026-01-21 13:45:00.311274916 +0000 UTC m=+0.138495237 container init fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:00 compute-0 podman[79392]: 2026-01-21 13:45:00.320288336 +0000 UTC m=+0.147508607 container start fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 13:45:00 compute-0 podman[79392]: 2026-01-21 13:45:00.32406792 +0000 UTC m=+0.151288221 container attach fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:00 compute-0 sudo[79461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:45:00 compute-0 sudo[79461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 sudo[79461]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:00 compute-0 sudo[79487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring.new
Jan 21 13:45:00 compute-0 sudo[79487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 ceph-mon[75031]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:00 compute-0 ceph-mon[75031]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 21 13:45:00 compute-0 ceph-mon[75031]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 21 13:45:00 compute-0 ceph-mon[75031]: Updating compute-0:/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring
Jan 21 13:45:00 compute-0 sudo[79487]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:00 compute-0 sudo[79554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring.new
Jan 21 13:45:00 compute-0 sudo[79554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 sudo[79554]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:00 compute-0 sudo[79579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring.new
Jan 21 13:45:00 compute-0 sudo[79579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 sudo[79579]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:00 compute-0 sudo[79604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-2f0e9cad-f0a3-5869-9cc3-8d84d071866a/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring.new /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/config/ceph.client.admin.keyring
Jan 21 13:45:00 compute-0 sudo[79604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 sudo[79604]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:45:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:00 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev 85295e5b-4a25-407c-8383-f553a6b980c4 (Updating crash deployment (+1 -> 1))
Jan 21 13:45:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 21 13:45:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 21 13:45:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 13:45:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:00 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:00 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 21 13:45:00 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 21 13:45:00 compute-0 sudo[79629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:45:00 compute-0 agitated_jemison[79446]: 
Jan 21 13:45:00 compute-0 agitated_jemison[79446]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
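agitated_jemison's output above is ceph orch status --format json: the backend is cephadm, it is not paused, and it reports available, which is plausibly what the asynchronous Ansible task launched at 13:44:59 is waiting on. A small sketch of polling for that condition; the timeout and interval are arbitrary choices, and the podman command mirrors the one invoked at 13:45:00:

    import json
    import subprocess
    import time

    ORCH_STATUS = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v20",
        "--fsid", "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "status", "--format", "json",
    ]

    def wait_for_orchestrator(timeout: float = 300, interval: float = 5) -> dict:
        # Poll until e.g. {"available": true, "backend": "cephadm", ...} is seen.
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(ORCH_STATUS, check=True,
                                 capture_output=True, text=True).stdout
            status = json.loads(out)
            if status.get("available"):
                return status
            time.sleep(interval)
        raise TimeoutError("cephadm orchestrator did not become available")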
Jan 21 13:45:00 compute-0 sudo[79629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 sudo[79629]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:00 compute-0 systemd[1]: libpod-fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10.scope: Deactivated successfully.
Jan 21 13:45:00 compute-0 podman[79392]: 2026-01-21 13:45:00.774349629 +0000 UTC m=+0.601569900 container died fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 13:45:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e99922a2c4187a40db912742cc875f27ce87443a7df6171de89a2e09562d212-merged.mount: Deactivated successfully.
Jan 21 13:45:00 compute-0 podman[79392]: 2026-01-21 13:45:00.817471204 +0000 UTC m=+0.644691485 container remove fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:00 compute-0 systemd[1]: libpod-conmon-fa465992b67bd83cbe0d91d8cc0d833b08c190d2fd8dbe88e5468bb64e5f6d10.scope: Deactivated successfully.
Jan 21 13:45:00 compute-0 sudo[79656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:45:00 compute-0 sudo[79656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:00 compute-0 ansible-async_wrapper.py[79325]: Module complete (79325)
Jan 21 13:45:00 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:01 compute-0 podman[79756]: 2026-01-21 13:45:01.257175163 +0000 UTC m=+0.037007669 container create 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:01 compute-0 systemd[1]: Started libpod-conmon-40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23.scope.
Jan 21 13:45:01 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:01 compute-0 sudo[79798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grloinokhovjrtqmoaxjzekyttciuuwq ; /usr/bin/python3'
Jan 21 13:45:01 compute-0 sudo[79798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:01 compute-0 podman[79756]: 2026-01-21 13:45:01.330033893 +0000 UTC m=+0.109866429 container init 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 21 13:45:01 compute-0 podman[79756]: 2026-01-21 13:45:01.239334868 +0000 UTC m=+0.019167394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:01 compute-0 podman[79756]: 2026-01-21 13:45:01.337951206 +0000 UTC m=+0.117783712 container start 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 13:45:01 compute-0 practical_bhaskara[79792]: 167 167
Jan 21 13:45:01 compute-0 systemd[1]: libpod-40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23.scope: Deactivated successfully.
Jan 21 13:45:01 compute-0 podman[79756]: 2026-01-21 13:45:01.342962107 +0000 UTC m=+0.122794633 container attach 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:01 compute-0 podman[79756]: 2026-01-21 13:45:01.343920901 +0000 UTC m=+0.123753417 container died 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 13:45:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-abe9148d836ab8cd088f8cf00abee498eeb71d36d46fba320aa6bf7a9aea6d90-merged.mount: Deactivated successfully.
Jan 21 13:45:01 compute-0 podman[79756]: 2026-01-21 13:45:01.383903953 +0000 UTC m=+0.163736459 container remove 40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bhaskara, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 13:45:01 compute-0 systemd[1]: libpod-conmon-40eb1d6877e3baf80b809a7f3c2e2aece263e79c6fbf25b75cb6b835ac766c23.scope: Deactivated successfully.
Jan 21 13:45:01 compute-0 systemd[1]: Reloading.
Jan 21 13:45:01 compute-0 python3[79800]: ansible-ansible.legacy.async_status Invoked with jid=j808190305125.79262 mode=status _async_dir=/root/.ansible_async
Jan 21 13:45:01 compute-0 sudo[79798]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:01 compute-0 systemd-rc-local-generator[79836]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:45:01 compute-0 systemd-sysv-generator[79840]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:45:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 21 13:45:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 21 13:45:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:01 compute-0 ceph-mon[75031]: Deploying daemon crash.compute-0 on compute-0
Jan 21 13:45:01 compute-0 ceph-mon[75031]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:45:01 compute-0 sudo[79895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drtzkfgxlgpjapazlbfurasyrdhvcjxf ; /usr/bin/python3'
Jan 21 13:45:01 compute-0 sudo[79895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:01 compute-0 systemd[1]: Reloading.
Jan 21 13:45:01 compute-0 systemd-sysv-generator[79930]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:45:01 compute-0 systemd-rc-local-generator[79927]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:45:01 compute-0 python3[79899]: ansible-ansible.legacy.async_status Invoked with jid=j808190305125.79262 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 13:45:01 compute-0 sudo[79895]: pam_unix(sudo:session): session closed for user root
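The async_status pair above (mode=status at 13:45:01, then mode=cleanup) is Ansible's standard polling protocol for the long-running cephadm task started earlier by ansible-async_wrapper.py: poll the job id until the wrapper reports the module complete, then delete the job record from _async_dir. The same two calls can be reproduced ad hoc with the job id taken from the log (a sketch; the inventory name and become settings are assumptions):

    ansible compute-0 -b -m ansible.builtin.async_status \
      -a "jid=j808190305125.79262 mode=status"
    ansible compute-0 -b -m ansible.builtin.async_status \
      -a "jid=j808190305125.79262 mode=cleanup"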
Jan 21 13:45:01 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:45:02 compute-0 sudo[80021]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaxepquqadavlmugwakfbpalmozjdyjp ; /usr/bin/python3'
Jan 21 13:45:02 compute-0 sudo[80021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:02 compute-0 podman[79988]: 2026-01-21 13:45:02.195588952 +0000 UTC m=+0.048049287 container create 52571d403aeaf640f6890095a6ccf83602c1d467929b5f9e357a86778560fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 13:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d180385bcc2cdca3f04429398a0e604c6f7b1a5bb279ebb760cea64fd8c2cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d180385bcc2cdca3f04429398a0e604c6f7b1a5bb279ebb760cea64fd8c2cf/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d180385bcc2cdca3f04429398a0e604c6f7b1a5bb279ebb760cea64fd8c2cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d180385bcc2cdca3f04429398a0e604c6f7b1a5bb279ebb760cea64fd8c2cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:02 compute-0 podman[79988]: 2026-01-21 13:45:02.178526178 +0000 UTC m=+0.030986553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:02 compute-0 podman[79988]: 2026-01-21 13:45:02.287429313 +0000 UTC m=+0.139889658 container init 52571d403aeaf640f6890095a6ccf83602c1d467929b5f9e357a86778560fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:02 compute-0 podman[79988]: 2026-01-21 13:45:02.295640781 +0000 UTC m=+0.148101116 container start 52571d403aeaf640f6890095a6ccf83602c1d467929b5f9e357a86778560fdbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 13:45:02 compute-0 bash[79988]: 52571d403aeaf640f6890095a6ccf83602c1d467929b5f9e357a86778560fdbc
Jan 21 13:45:02 compute-0 systemd[1]: Started Ceph crash.compute-0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:45:02 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 21 13:45:02 compute-0 sudo[79656]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:02 compute-0 python3[80026]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 21 13:45:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:02 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:02 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 21 13:45:02 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:02 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev 85295e5b-4a25-407c-8383-f553a6b980c4 (Updating crash deployment (+1 -> 1))
Jan 21 13:45:02 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event 85295e5b-4a25-407c-8383-f553a6b980c4 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 21 13:45:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 21 13:45:02 compute-0 sudo[80021]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:02 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 13:45:02 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:02 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev fa78ad26-c6ea-4526-bd19-a2f94f2692d1 (Updating mgr deployment (+1 -> 2))
Jan 21 13:45:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.dxoawe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 21 13:45:02 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.dxoawe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 21 13:45:02 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dxoawe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 13:45:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 13:45:02 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr services"} : dispatch
Jan 21 13:45:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:02 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:02 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.dxoawe on compute-0
Jan 21 13:45:02 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.dxoawe on compute-0
Jan 21 13:45:02 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.444+0000 7f2f44a0b640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 21 13:45:02 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.444+0000 7f2f44a0b640 -1 AuthRegistry(0x7f2f40052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 21 13:45:02 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.445+0000 7f2f44a0b640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 21 13:45:02 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.445+0000 7f2f44a0b640 -1 AuthRegistry(0x7f2f44a09fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 21 13:45:02 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.446+0000 7f2f3e575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 21 13:45:02 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: 2026-01-21T13:45:02.446+0000 7f2f44a0b640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 21 13:45:02 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 21 13:45:02 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-crash-compute-0[80029]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
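The keyring errors above are the crash agent's startup ping ("pinging cluster to exercise our key"): inside its container, ceph-crash hunts the default admin keyring paths, finds nothing, disables cephx, and the ping is rejected (errno 13, permission denied) before the daemon settles into its 600 s scan of /var/lib/ceph/crash. The identity it is meant to use is the client.crash.compute-0 key created by the mgr a second earlier and bind-mounted at /etc/ceph/ceph.client.crash.compute-0.keyring inside the container. One way to confirm that key exists from the admin side (a sketch, reusing the podman-wrapped CLI shape from this log):

    podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v20 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      auth get client.crash.compute-0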
Jan 21 13:45:02 compute-0 sudo[80038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:02 compute-0 sudo[80038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:02 compute-0 sudo[80038]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:02 compute-0 sudo[80073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:45:02 compute-0 sudo[80073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:02 compute-0 sudo[80131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuriwwnxdyaamncsvgnujhkcdfjmjpzq ; /usr/bin/python3'
Jan 21 13:45:02 compute-0 sudo[80131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:02 compute-0 python3[80136]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:02 compute-0 podman[80162]: 2026-01-21 13:45:02.871772276 +0000 UTC m=+0.038307997 container create 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:02 compute-0 systemd[1]: Started libpod-conmon-3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52.scope.
Jan 21 13:45:02 compute-0 podman[80176]: 2026-01-21 13:45:02.90842578 +0000 UTC m=+0.034737717 container create 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 13:45:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:02 compute-0 systemd[1]: Started libpod-conmon-345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a.scope.
Jan 21 13:45:02 compute-0 podman[80162]: 2026-01-21 13:45:02.85447777 +0000 UTC m=+0.021013511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:02 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:02 compute-0 podman[80162]: 2026-01-21 13:45:02.95885699 +0000 UTC m=+0.125392731 container init 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 13:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931664503be5c68e5a065f5cf8d770aabbe13eb8634bcb721045cb550ec5f96c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931664503be5c68e5a065f5cf8d770aabbe13eb8634bcb721045cb550ec5f96c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931664503be5c68e5a065f5cf8d770aabbe13eb8634bcb721045cb550ec5f96c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:02 compute-0 podman[80162]: 2026-01-21 13:45:02.970614558 +0000 UTC m=+0.137150299 container start 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 13:45:02 compute-0 podman[80162]: 2026-01-21 13:45:02.974899349 +0000 UTC m=+0.141435070 container attach 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:02 compute-0 affectionate_turing[80191]: 167 167
Jan 21 13:45:02 compute-0 systemd[1]: libpod-3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52.scope: Deactivated successfully.
Jan 21 13:45:02 compute-0 podman[80176]: 2026-01-21 13:45:02.982239784 +0000 UTC m=+0.108551741 container init 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 13:45:02 compute-0 podman[80162]: 2026-01-21 13:45:02.983185278 +0000 UTC m=+0.149721029 container died 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:02 compute-0 podman[80176]: 2026-01-21 13:45:02.987913885 +0000 UTC m=+0.114225822 container start 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:02 compute-0 podman[80176]: 2026-01-21 13:45:02.894257087 +0000 UTC m=+0.020569054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:02 compute-0 podman[80176]: 2026-01-21 13:45:02.995123638 +0000 UTC m=+0.121435575 container attach 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9e0fced8f281c2a1d9ed10b1b999592f4951067d92328eaf2282cdb43f7be97-merged.mount: Deactivated successfully.
Jan 21 13:45:03 compute-0 podman[80162]: 2026-01-21 13:45:03.024923724 +0000 UTC m=+0.191459445 container remove 3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 13:45:03 compute-0 systemd[1]: libpod-conmon-3e9cf1ce2fd2a54c8fb06e41f391bd425584d3fe17e36c5dda877c98289f2d52.scope: Deactivated successfully.
Jan 21 13:45:03 compute-0 systemd[1]: Reloading.
Jan 21 13:45:03 compute-0 systemd-rc-local-generator[80260]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:45:03 compute-0 systemd-sysv-generator[80264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:45:03 compute-0 ceph-mon[75031]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.dxoawe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 21 13:45:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dxoawe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 21 13:45:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr services"} : dispatch
Jan 21 13:45:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:03 compute-0 systemd[1]: Reloading.
Jan 21 13:45:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:45:03 compute-0 gallant_khorana[80197]: 
Jan 21 13:45:03 compute-0 gallant_khorana[80197]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 13:45:03 compute-0 podman[80176]: 2026-01-21 13:45:03.430021788 +0000 UTC m=+0.556333725 container died 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:03 compute-0 systemd-rc-local-generator[80303]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:45:03 compute-0 systemd-sysv-generator[80306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:45:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:03 compute-0 systemd[1]: libpod-345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a.scope: Deactivated successfully.
Jan 21 13:45:03 compute-0 systemd[1]: Starting Ceph mgr.compute-0.dxoawe for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:45:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-931664503be5c68e5a065f5cf8d770aabbe13eb8634bcb721045cb550ec5f96c-merged.mount: Deactivated successfully.
Jan 21 13:45:03 compute-0 podman[80176]: 2026-01-21 13:45:03.681961905 +0000 UTC m=+0.808273842 container remove 345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a (image=quay.io/ceph/ceph:v20, name=gallant_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 21 13:45:03 compute-0 systemd[1]: libpod-conmon-345ad8281350cc21c02bb23f4234e3959735b34f77d2a296ec6d837a0316082a.scope: Deactivated successfully.
Jan 21 13:45:03 compute-0 sudo[80131]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:03 compute-0 podman[80376]: 2026-01-21 13:45:03.933971623 +0000 UTC m=+0.049290374 container create 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 13:45:03 compute-0 sudo[80412]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayitltrposteixgaxyehrpmlauensuez ; /usr/bin/python3'
Jan 21 13:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb232cf63afa8d1088f0543e88d69ab50488f620288f21a2438eb9a414b63861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb232cf63afa8d1088f0543e88d69ab50488f620288f21a2438eb9a414b63861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb232cf63afa8d1088f0543e88d69ab50488f620288f21a2438eb9a414b63861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb232cf63afa8d1088f0543e88d69ab50488f620288f21a2438eb9a414b63861/merged/var/lib/ceph/mgr/ceph-compute-0.dxoawe supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:03 compute-0 sudo[80412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:04 compute-0 podman[80376]: 2026-01-21 13:45:04.001104932 +0000 UTC m=+0.116423723 container init 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 13:45:04 compute-0 podman[80376]: 2026-01-21 13:45:03.911891808 +0000 UTC m=+0.027210589 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:04 compute-0 podman[80376]: 2026-01-21 13:45:04.006296336 +0000 UTC m=+0.121615087 container start 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:04 compute-0 bash[80376]: 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c
Jan 21 13:45:04 compute-0 systemd[1]: Started Ceph mgr.compute-0.dxoawe for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:45:04 compute-0 ceph-mgr[80421]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 13:45:04 compute-0 ceph-mgr[80421]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 21 13:45:04 compute-0 ceph-mgr[80421]: pidfile_write: ignore empty --pid-file
Jan 21 13:45:04 compute-0 sudo[80073]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:04 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'alerts'
Jan 21 13:45:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 13:45:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:04 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev fa78ad26-c6ea-4526-bd19-a2f94f2692d1 (Updating mgr deployment (+1 -> 2))
Jan 21 13:45:04 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event fa78ad26-c6ea-4526-bd19-a2f94f2692d1 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 21 13:45:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 13:45:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:04 compute-0 python3[80419]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:04 compute-0 sudo[80442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:45:04 compute-0 sudo[80442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:04 compute-0 sudo[80442]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:04 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'balancer'
Jan 21 13:45:04 compute-0 podman[80457]: 2026-01-21 13:45:04.198119955 +0000 UTC m=+0.052615572 container create 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Jan 21 13:45:04 compute-0 systemd[1]: Started libpod-conmon-532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e.scope.
Jan 21 13:45:04 compute-0 sudo[80478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:04 compute-0 sudo[80478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:04 compute-0 sudo[80478]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:04 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a57b30e252a416daa513c9d73a71b29b1878f4b8b46e7f0db6d624232aa8eda/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a57b30e252a416daa513c9d73a71b29b1878f4b8b46e7f0db6d624232aa8eda/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a57b30e252a416daa513c9d73a71b29b1878f4b8b46e7f0db6d624232aa8eda/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:04 compute-0 podman[80457]: 2026-01-21 13:45:04.176842601 +0000 UTC m=+0.031338268 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:04 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'cephadm'
Jan 21 13:45:04 compute-0 podman[80457]: 2026-01-21 13:45:04.293589268 +0000 UTC m=+0.148084915 container init 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 13:45:04 compute-0 podman[80457]: 2026-01-21 13:45:04.299375301 +0000 UTC m=+0.153870928 container start 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 13:45:04 compute-0 podman[80457]: 2026-01-21 13:45:04.303135965 +0000 UTC m=+0.157631592 container attach 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 13:45:04 compute-0 sudo[80512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 13:45:04 compute-0 sudo[80512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:04 compute-0 ceph-mon[75031]: Deploying daemon mgr.compute-0.dxoawe on compute-0
Jan 21 13:45:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 21 13:45:04 compute-0 podman[80603]: 2026-01-21 13:45:04.74831689 +0000 UTC m=+0.054665911 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 13:45:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3509207885' entity='client.admin' 
Jan 21 13:45:04 compute-0 systemd[1]: libpod-532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e.scope: Deactivated successfully.
Jan 21 13:45:04 compute-0 podman[80457]: 2026-01-21 13:45:04.77414729 +0000 UTC m=+0.628642937 container died 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 21 13:45:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a57b30e252a416daa513c9d73a71b29b1878f4b8b46e7f0db6d624232aa8eda-merged.mount: Deactivated successfully.
Jan 21 13:45:04 compute-0 podman[80457]: 2026-01-21 13:45:04.82389705 +0000 UTC m=+0.678392697 container remove 532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e (image=quay.io/ceph/ceph:v20, name=thirsty_cori, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:04 compute-0 systemd[1]: libpod-conmon-532835bd96ee7117874d45982bc0379dd6538f29538f76b7bb73aa359af69b7e.scope: Deactivated successfully.
Jan 21 13:45:04 compute-0 sudo[80412]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:04 compute-0 podman[80603]: 2026-01-21 13:45:04.849860691 +0000 UTC m=+0.156209692 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:04 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:05 compute-0 sudo[80711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnevubszcgmnbvetnoayztarsigbxnci ; /usr/bin/python3'
Jan 21 13:45:05 compute-0 sudo[80711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:05 compute-0 ansible-async_wrapper.py[79322]: Done in kid B.
Jan 21 13:45:05 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'crash'
Jan 21 13:45:05 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'dashboard'
Jan 21 13:45:05 compute-0 python3[80720]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:05 compute-0 podman[80749]: 2026-01-21 13:45:05.223701919 +0000 UTC m=+0.046448425 container create ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 13:45:05 compute-0 sudo[80512]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:05 compute-0 systemd[1]: Started libpod-conmon-ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db.scope.
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d32c4e786e205b83806d875665df0700f69dd1cd9cbd16bfd9279406bc9c96f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d32c4e786e205b83806d875665df0700f69dd1cd9cbd16bfd9279406bc9c96f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d32c4e786e205b83806d875665df0700f69dd1cd9cbd16bfd9279406bc9c96f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:05 compute-0 podman[80749]: 2026-01-21 13:45:05.205410817 +0000 UTC m=+0.028157343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:05 compute-0 podman[80749]: 2026-01-21 13:45:05.31203083 +0000 UTC m=+0.134777386 container init ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 13:45:05 compute-0 podman[80749]: 2026-01-21 13:45:05.31969874 +0000 UTC m=+0.142445246 container start ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:05 compute-0 podman[80749]: 2026-01-21 13:45:05.323596105 +0000 UTC m=+0.146342661 container attach ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:05 compute-0 sudo[80781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:45:05 compute-0 sudo[80781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:05 compute-0 sudo[80781]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:05 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 13:45:05 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:05 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 13:45:05 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 13:45:05 compute-0 ceph-mon[75031]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:05 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3509207885' entity='client.admin' 
Jan 21 13:45:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:05 compute-0 ceph-mon[75031]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 21 13:45:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 21 13:45:05 compute-0 sudo[80807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:05 compute-0 sudo[80807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:05 compute-0 sudo[80807]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:05 compute-0 sudo[80832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:45:05 compute-0 sudo[80832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/259544767' entity='client.admin' 
Jan 21 13:45:05 compute-0 podman[80892]: 2026-01-21 13:45:05.715140836 +0000 UTC m=+0.039246592 container create 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 13:45:05 compute-0 podman[80749]: 2026-01-21 13:45:05.730984811 +0000 UTC m=+0.553731317 container died ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 13:45:05 compute-0 systemd[1]: libpod-ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db.scope: Deactivated successfully.
Jan 21 13:45:05 compute-0 systemd[1]: Started libpod-conmon-5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c.scope.
Jan 21 13:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d32c4e786e205b83806d875665df0700f69dd1cd9cbd16bfd9279406bc9c96f-merged.mount: Deactivated successfully.
Jan 21 13:45:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:05 compute-0 podman[80749]: 2026-01-21 13:45:05.768971324 +0000 UTC m=+0.591717830 container remove ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db (image=quay.io/ceph/ceph:v20, name=xenodochial_shirley, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 13:45:05 compute-0 systemd[1]: libpod-conmon-ed7c9424128d736dd644f6171670e8e52c18cddbe8ebafca86673afe31c4b3db.scope: Deactivated successfully.
Jan 21 13:45:05 compute-0 podman[80892]: 2026-01-21 13:45:05.783918958 +0000 UTC m=+0.108024764 container init 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 13:45:05 compute-0 sudo[80711]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:05 compute-0 podman[80892]: 2026-01-21 13:45:05.790201118 +0000 UTC m=+0.114306874 container start 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:05 compute-0 podman[80892]: 2026-01-21 13:45:05.794460809 +0000 UTC m=+0.118566615 container attach 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 13:45:05 compute-0 crazy_khorana[80917]: 167 167
Jan 21 13:45:05 compute-0 podman[80892]: 2026-01-21 13:45:05.79600515 +0000 UTC m=+0.120110946 container died 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:05 compute-0 systemd[1]: libpod-5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c.scope: Deactivated successfully.
Jan 21 13:45:05 compute-0 podman[80892]: 2026-01-21 13:45:05.700449166 +0000 UTC m=+0.024554942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b97f792012f04bae68a1b795cc58d4761c58e4af5bd1db096f149f44ee69589-merged.mount: Deactivated successfully.
Jan 21 13:45:05 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'devicehealth'
Jan 21 13:45:05 compute-0 podman[80892]: 2026-01-21 13:45:05.84152116 +0000 UTC m=+0.165626926 container remove 5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c (image=quay.io/ceph/ceph:v20, name=crazy_khorana, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:05 compute-0 systemd[1]: libpod-conmon-5edb7c78603780605dc9bc7f97d4f9a150513e3e5e0445c81b1e3e63ed12013c.scope: Deactivated successfully.
Jan 21 13:45:05 compute-0 sudo[80832]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:05 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'diskprediction_local'
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:05 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.tnwklj (unknown last config time)...
Jan 21 13:45:05 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.tnwklj (unknown last config time)...
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.tnwklj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.tnwklj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr services"} : dispatch
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:05 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.tnwklj on compute-0
Jan 21 13:45:05 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.tnwklj on compute-0
Jan 21 13:45:05 compute-0 ceph-mgr[75322]: [progress INFO root] Writing back 2 completed events
Jan 21 13:45:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 13:45:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:05 compute-0 sudo[80940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:05 compute-0 sudo[80940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:06 compute-0 sudo[80940]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:06 compute-0 sudo[80988]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijvyjdmkobbuyykelwimohjzaqwjnhho ; /usr/bin/python3'
Jan 21 13:45:06 compute-0 sudo[80988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:06 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe[80415]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 21 13:45:06 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe[80415]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 21 13:45:06 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe[80415]:   from numpy import show_config as show_numpy_config
Jan 21 13:45:06 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'influx'
Jan 21 13:45:06 compute-0 sudo[80989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:45:06 compute-0 sudo[80989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:06 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'insights'
Jan 21 13:45:06 compute-0 python3[80997]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:06 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'iostat'
Jan 21 13:45:06 compute-0 podman[81016]: 2026-01-21 13:45:06.252374197 +0000 UTC m=+0.066067914 container create ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 13:45:06 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'k8sevents'
Jan 21 13:45:06 compute-0 systemd[1]: Started libpod-conmon-ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6.scope.
Jan 21 13:45:06 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5794805674bce61625bca5b2334d3a197188839c30b8b4b9b72922e850dd3c5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5794805674bce61625bca5b2334d3a197188839c30b8b4b9b72922e850dd3c5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5794805674bce61625bca5b2334d3a197188839c30b8b4b9b72922e850dd3c5b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:06 compute-0 podman[81016]: 2026-01-21 13:45:06.230869089 +0000 UTC m=+0.044562826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:06 compute-0 podman[81016]: 2026-01-21 13:45:06.330058505 +0000 UTC m=+0.143752242 container init ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:06 compute-0 podman[81016]: 2026-01-21 13:45:06.337697745 +0000 UTC m=+0.151391482 container start ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:06 compute-0 podman[81016]: 2026-01-21 13:45:06.353591772 +0000 UTC m=+0.167285509 container attach ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 13:45:06 compute-0 podman[81049]: 2026-01-21 13:45:06.393032585 +0000 UTC m=+0.039093650 container create 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 13:45:06 compute-0 systemd[1]: Started libpod-conmon-7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01.scope.
Jan 21 13:45:06 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:06 compute-0 podman[81049]: 2026-01-21 13:45:06.376745033 +0000 UTC m=+0.022806128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:06 compute-0 podman[81049]: 2026-01-21 13:45:06.478918402 +0000 UTC m=+0.124979497 container init 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 13:45:06 compute-0 podman[81049]: 2026-01-21 13:45:06.484008104 +0000 UTC m=+0.130069169 container start 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 21 13:45:06 compute-0 podman[81049]: 2026-01-21 13:45:06.487344032 +0000 UTC m=+0.133405127 container attach 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:06 compute-0 frosty_hawking[81065]: 167 167
Jan 21 13:45:06 compute-0 systemd[1]: libpod-7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01.scope: Deactivated successfully.
Jan 21 13:45:06 compute-0 podman[81089]: 2026-01-21 13:45:06.536863529 +0000 UTC m=+0.030871972 container died 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-930f334f21f91547f8d4d1b218dd88a9ad229b1756f79b08ea7f510442fff92e-merged.mount: Deactivated successfully.
Jan 21 13:45:06 compute-0 podman[81089]: 2026-01-21 13:45:06.578744747 +0000 UTC m=+0.072753110 container remove 7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01 (image=quay.io/ceph/ceph:v20, name=frosty_hawking, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:06 compute-0 systemd[1]: libpod-conmon-7cd9eff3e4a6c1553681e884bd96ad2187db65704d09f9c86a1228e995dc2f01.scope: Deactivated successfully.
Jan 21 13:45:06 compute-0 sudo[80989]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:06 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'localpool'
Jan 21 13:45:06 compute-0 sudo[81104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:06 compute-0 sudo[81104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:06 compute-0 ceph-mon[75031]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:06 compute-0 sudo[81104]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:06 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/259544767' entity='client.admin' 
Jan 21 13:45:06 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:06 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:06 compute-0 ceph-mon[75031]: Reconfiguring mgr.compute-0.tnwklj (unknown last config time)...
Jan 21 13:45:06 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.tnwklj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 21 13:45:06 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr services"} : dispatch
Jan 21 13:45:06 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:06 compute-0 ceph-mon[75031]: Reconfiguring daemon mgr.compute-0.tnwklj on compute-0
Jan 21 13:45:06 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:06 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:06 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 21 13:45:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1919572567' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 21 13:45:06 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'mds_autoscaler'
Jan 21 13:45:06 compute-0 sudo[81129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 13:45:06 compute-0 sudo[81129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:06 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:06 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'mirroring'
Jan 21 13:45:07 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'nfs'
Jan 21 13:45:07 compute-0 podman[81200]: 2026-01-21 13:45:07.19250701 +0000 UTC m=+0.085758365 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 13:45:07 compute-0 podman[81200]: 2026-01-21 13:45:07.313735481 +0000 UTC m=+0.206986806 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:07 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'orchestrator'
Jan 21 13:45:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:07 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'osd_perf_query'
Jan 21 13:45:07 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'osd_support'
Jan 21 13:45:07 compute-0 sudo[81129]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 21 13:45:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:45:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:07 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:45:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:45:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1919572567' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 21 13:45:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 21 13:45:07 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1919572567' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 21 13:45:07 compute-0 relaxed_edison[81043]: set require_min_compat_client to mimic
Jan 21 13:45:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:07 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 21 13:45:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:45:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:07 compute-0 systemd[1]: libpod-ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6.scope: Deactivated successfully.
Jan 21 13:45:07 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'pg_autoscaler'
Jan 21 13:45:07 compute-0 podman[81016]: 2026-01-21 13:45:07.749839768 +0000 UTC m=+1.563533515 container died ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 13:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5794805674bce61625bca5b2334d3a197188839c30b8b4b9b72922e850dd3c5b-merged.mount: Deactivated successfully.
Jan 21 13:45:07 compute-0 podman[81016]: 2026-01-21 13:45:07.80320972 +0000 UTC m=+1.616903437 container remove ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6 (image=quay.io/ceph/ceph:v20, name=relaxed_edison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 13:45:07 compute-0 systemd[1]: libpod-conmon-ded4cf97fe8b1e3cddd9ba6d3caf0b0997fdb12c3ecfd8ebd2a56e5f2aecb8d6.scope: Deactivated successfully.
Jan 21 13:45:07 compute-0 sudo[81315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:45:07 compute-0 sudo[81315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:07 compute-0 sudo[81315]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:07 compute-0 sudo[80988]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:07 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'progress'
Jan 21 13:45:07 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'prometheus'
Jan 21 13:45:08 compute-0 sudo[81373]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taspjvefcdgxueveqxmgpedcgblxwwsp ; /usr/bin/python3'
Jan 21 13:45:08 compute-0 sudo[81373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:08 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'rbd_support'
Jan 21 13:45:08 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'rgw'
Jan 21 13:45:08 compute-0 python3[81375]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:08 compute-0 podman[81376]: 2026-01-21 13:45:08.490513223 +0000 UTC m=+0.042587448 container create 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 21 13:45:08 compute-0 systemd[1]: Started libpod-conmon-81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e.scope.
Jan 21 13:45:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6a5f119a51460542715af86b25ed412f143e272f771fc237316efe4490110f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6a5f119a51460542715af86b25ed412f143e272f771fc237316efe4490110f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6a5f119a51460542715af86b25ed412f143e272f771fc237316efe4490110f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:08 compute-0 podman[81376]: 2026-01-21 13:45:08.468271236 +0000 UTC m=+0.020345461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:08 compute-0 podman[81376]: 2026-01-21 13:45:08.56727286 +0000 UTC m=+0.119347105 container init 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 13:45:08 compute-0 podman[81376]: 2026-01-21 13:45:08.574260229 +0000 UTC m=+0.126334494 container start 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 13:45:08 compute-0 podman[81376]: 2026-01-21 13:45:08.578811224 +0000 UTC m=+0.130885489 container attach 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 13:45:08 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'rook'
Jan 21 13:45:08 compute-0 ceph-mon[75031]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:45:08 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1919572567' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 21 13:45:08 compute-0 ceph-mon[75031]: osdmap e3: 0 total, 0 up, 0 in
Jan 21 13:45:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:08 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:08 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:45:09 compute-0 sudo[81415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:09 compute-0 sudo[81415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:09 compute-0 sudo[81415]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:09 compute-0 sudo[81440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 21 13:45:09 compute-0 sudo[81440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:09 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'selftest'
Jan 21 13:45:09 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'smb'
Jan 21 13:45:09 compute-0 sudo[81440]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 13:45:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 13:45:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 13:45:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 21 13:45:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: [cephadm INFO root] Added host compute-0
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 21 13:45:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:09 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 13:45:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:45:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:45:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 21 13:45:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 13:45:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 21 13:45:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 21 13:45:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 21 13:45:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev b4f89928-3bed-4eb1-adff-3f77fb354c0b (Updating mgr deployment (-1 -> 1))
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.dxoawe from compute-0 -- ports [8765]
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.dxoawe from compute-0 -- ports [8765]
Jan 21 13:45:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 keen_shannon[81391]: Added host 'compute-0' with addr '192.168.122.100'
Jan 21 13:45:09 compute-0 keen_shannon[81391]: Scheduled mon update...
Jan 21 13:45:09 compute-0 keen_shannon[81391]: Scheduled mgr update...
Jan 21 13:45:09 compute-0 keen_shannon[81391]: Scheduled osd.default_drive_group update...
Jan 21 13:45:09 compute-0 systemd[1]: libpod-81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e.scope: Deactivated successfully.
Jan 21 13:45:09 compute-0 podman[81376]: 2026-01-21 13:45:09.485155706 +0000 UTC m=+1.037229971 container died 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 13:45:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb6a5f119a51460542715af86b25ed412f143e272f771fc237316efe4490110f-merged.mount: Deactivated successfully.
Jan 21 13:45:09 compute-0 sudo[81485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:09 compute-0 sudo[81485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:09 compute-0 podman[81376]: 2026-01-21 13:45:09.538276875 +0000 UTC m=+1.090351140 container remove 81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e (image=quay.io/ceph/ceph:v20, name=keen_shannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 13:45:09 compute-0 sudo[81485]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:09 compute-0 systemd[1]: libpod-conmon-81fc07498a0524741c155f6a727685933bc667972e09217c6edfa2915d2f880e.scope: Deactivated successfully.
Jan 21 13:45:09 compute-0 sudo[81373]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:09 compute-0 sudo[81522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --name mgr.compute-0.dxoawe --force --tcp-ports 8765
Jan 21 13:45:09 compute-0 sudo[81522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:09 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'snap_schedule'
Jan 21 13:45:09 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'stats'
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:09 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'status'
Jan 21 13:45:09 compute-0 sudo[81582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqqvvoccvmybqddoibnspvekrwcalyts ; /usr/bin/python3'
Jan 21 13:45:09 compute-0 sudo[81582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:09 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'telegraf'
Jan 21 13:45:09 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.dxoawe for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:45:09 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'telemetry'
Jan 21 13:45:09 compute-0 python3[81584]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:10 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'test_orchestrator'
Jan 21 13:45:10 compute-0 podman[81610]: 2026-01-21 13:45:10.021969041 +0000 UTC m=+0.025180851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:10 compute-0 ceph-mgr[80421]: mgr[py] Loading python module 'volumes'
Jan 21 13:45:10 compute-0 podman[81610]: 2026-01-21 13:45:10.459084092 +0000 UTC m=+0.462295892 container create 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:10 compute-0 systemd[1]: Started libpod-conmon-7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882.scope.
Jan 21 13:45:10 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:10 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : Standby manager daemon compute-0.dxoawe started
Jan 21 13:45:10 compute-0 ceph-mgr[80421]: ms_deliver_dispatch: unhandled message 0x55e988a70000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 21 13:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed217ad443830204a63fad09581f268b53ec2029a2dea438568c478b5437de63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:10 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from mgr.compute-0.dxoawe 192.168.122.100:0/2133553605; not ready for session (expect reconnect)
Jan 21 13:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed217ad443830204a63fad09581f268b53ec2029a2dea438568c478b5437de63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed217ad443830204a63fad09581f268b53ec2029a2dea438568c478b5437de63/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:10 compute-0 podman[81610]: 2026-01-21 13:45:10.651204195 +0000 UTC m=+0.654416035 container init 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 13:45:10 compute-0 podman[81610]: 2026-01-21 13:45:10.663549001 +0000 UTC m=+0.666760771 container start 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 13:45:10 compute-0 podman[81610]: 2026-01-21 13:45:10.679776623 +0000 UTC m=+0.682988423 container attach 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 13:45:10 compute-0 ceph-mon[75031]: Added host compute-0
Jan 21 13:45:10 compute-0 ceph-mon[75031]: Saving service mon spec with placement compute-0
Jan 21 13:45:10 compute-0 ceph-mon[75031]: Saving service mgr spec with placement compute-0
Jan 21 13:45:10 compute-0 ceph-mon[75031]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 21 13:45:10 compute-0 ceph-mon[75031]: Saving service osd.default_drive_group spec with placement compute-0
Jan 21 13:45:10 compute-0 ceph-mon[75031]: Removing daemon mgr.compute-0.dxoawe from compute-0 -- ports [8765]
Jan 21 13:45:10 compute-0 ceph-mon[75031]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:10 compute-0 ceph-mon[75031]: Standby manager daemon compute-0.dxoawe started
Jan 21 13:45:10 compute-0 podman[81621]: 2026-01-21 13:45:10.804390343 +0000 UTC m=+0.772540712 container stop 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:10 compute-0 podman[81621]: 2026-01-21 13:45:10.836810015 +0000 UTC m=+0.804960384 container died 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 13:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb232cf63afa8d1088f0543e88d69ab50488f620288f21a2438eb9a414b63861-merged.mount: Deactivated successfully.
Jan 21 13:45:10 compute-0 podman[81621]: 2026-01-21 13:45:10.887488319 +0000 UTC m=+0.855638658 container remove 238a5a0f73c8de26b71a83838bd1a0dbc8996e94d131a74b220f994c1e766d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:10 compute-0 bash[81621]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-dxoawe
Jan 21 13:45:10 compute-0 systemd[1]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mgr.compute-0.dxoawe.service: Main process exited, code=exited, status=143/n/a
Jan 21 13:45:10 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:45:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:45:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:45:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:45:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:45:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:45:11 compute-0 systemd[1]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mgr.compute-0.dxoawe.service: Failed with result 'exit-code'.
Jan 21 13:45:11 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.dxoawe for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:45:11 compute-0 systemd[1]: ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mgr.compute-0.dxoawe.service: Consumed 7.548s CPU time, 415.3M memory peak, read 0B from disk, written 155.5K to disk.
Jan 21 13:45:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:11 compute-0 systemd[1]: Reloading.
Jan 21 13:45:11 compute-0 systemd-rc-local-generator[81743]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:45:11 compute-0 systemd-sysv-generator[81747]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:45:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 13:45:11 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262478170' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 13:45:11 compute-0 lucid_heyrovsky[81638]: 
Jan 21 13:45:11 compute-0 lucid_heyrovsky[81638]: {"fsid":"2f0e9cad-f0a3-5869-9cc3-8d84d071866a","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":50,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-21T13:44:18.859596+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-21T13:44:18.861719+0000","services":{}},"progress_events":{"b4f89928-3bed-4eb1-adff-3f77fb354c0b":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 21 13:45:11 compute-0 podman[81610]: 2026-01-21 13:45:11.306242282 +0000 UTC m=+1.309454072 container died 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 13:45:11 compute-0 systemd[1]: libpod-7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882.scope: Deactivated successfully.
Jan 21 13:45:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed217ad443830204a63fad09581f268b53ec2029a2dea438568c478b5437de63-merged.mount: Deactivated successfully.
Jan 21 13:45:11 compute-0 podman[81610]: 2026-01-21 13:45:11.397135004 +0000 UTC m=+1.400346774 container remove 7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882 (image=quay.io/ceph/ceph:v20, name=lucid_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 13:45:11 compute-0 systemd[1]: libpod-conmon-7373440d60b6fbd9e5d40921740dcd8e0e8f3da0cb5d84f1af793b273b1ad882.scope: Deactivated successfully.
Jan 21 13:45:11 compute-0 sudo[81582]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:11 compute-0 sudo[81522]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:11 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.dxoawe
Jan 21 13:45:11 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.dxoawe
Jan 21 13:45:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.dxoawe"} v 0)
Jan 21 13:45:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.dxoawe"} : dispatch
Jan 21 13:45:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.dxoawe"}]': finished
Jan 21 13:45:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 13:45:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:11 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev b4f89928-3bed-4eb1-adff-3f77fb354c0b (Updating mgr deployment (-1 -> 1))
Jan 21 13:45:11 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event b4f89928-3bed-4eb1-adff-3f77fb354c0b (Updating mgr deployment (-1 -> 1)) in 2 seconds
Jan 21 13:45:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 21 13:45:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:11 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.tnwklj(active, since 32s), standbys: compute-0.dxoawe
Jan 21 13:45:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dxoawe", "id": "compute-0.dxoawe"} v 0)
Jan 21 13:45:11 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr metadata", "who": "compute-0.dxoawe", "id": "compute-0.dxoawe"} : dispatch
Jan 21 13:45:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:11 compute-0 sudo[81769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:45:11 compute-0 sudo[81769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:11 compute-0 sudo[81769]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:11 compute-0 sudo[81794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:11 compute-0 sudo[81794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:11 compute-0 sudo[81794]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:11 compute-0 sudo[81819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 13:45:11 compute-0 sudo[81819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:11 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/4262478170' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 13:45:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.dxoawe"} : dispatch
Jan 21 13:45:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.dxoawe"}]': finished
Jan 21 13:45:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:11 compute-0 ceph-mon[75031]: mgrmap e9: compute-0.tnwklj(active, since 32s), standbys: compute-0.dxoawe
Jan 21 13:45:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mgr metadata", "who": "compute-0.dxoawe", "id": "compute-0.dxoawe"} : dispatch
Jan 21 13:45:12 compute-0 podman[81889]: 2026-01-21 13:45:12.088054898 +0000 UTC m=+0.058343356 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 13:45:12 compute-0 podman[81889]: 2026-01-21 13:45:12.221898027 +0000 UTC m=+0.192186445 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 13:45:12 compute-0 sudo[81819]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:12 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:45:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:45:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:45:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:45:12 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:45:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:45:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:45:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:12 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:12 compute-0 sudo[81985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:12 compute-0 sudo[81985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:12 compute-0 sudo[81985]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:12 compute-0 sudo[82010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:45:12 compute-0 sudo[82010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:12 compute-0 ceph-mon[75031]: Removing key for mgr.compute-0.dxoawe
Jan 21 13:45:12 compute-0 ceph-mon[75031]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:45:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:45:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:45:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:12 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:13 compute-0 podman[82047]: 2026-01-21 13:45:13.056872134 +0000 UTC m=+0.052677651 container create f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:13 compute-0 systemd[1]: Started libpod-conmon-f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152.scope.
Jan 21 13:45:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:13 compute-0 podman[82047]: 2026-01-21 13:45:13.127526972 +0000 UTC m=+0.123332469 container init f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:13 compute-0 podman[82047]: 2026-01-21 13:45:13.136099927 +0000 UTC m=+0.131905424 container start f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:13 compute-0 podman[82047]: 2026-01-21 13:45:13.040706287 +0000 UTC m=+0.036511784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:13 compute-0 elastic_rubin[82063]: 167 167
Jan 21 13:45:13 compute-0 podman[82047]: 2026-01-21 13:45:13.140179354 +0000 UTC m=+0.135984861 container attach f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 13:45:13 compute-0 systemd[1]: libpod-f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152.scope: Deactivated successfully.
Jan 21 13:45:13 compute-0 podman[82047]: 2026-01-21 13:45:13.140756589 +0000 UTC m=+0.136562076 container died f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 13:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e37374a62fb66e29dc30746a935c02ca375abe1e582da0ea051cd16bf2a89bb-merged.mount: Deactivated successfully.
Jan 21 13:45:13 compute-0 podman[82047]: 2026-01-21 13:45:13.184535515 +0000 UTC m=+0.180340992 container remove f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_rubin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:13 compute-0 systemd[1]: libpod-conmon-f01c89dd5b9114cd4d43353e1a99fd51740e01f5e4866b0b46592246fa947152.scope: Deactivated successfully.
Jan 21 13:45:13 compute-0 podman[82086]: 2026-01-21 13:45:13.321099278 +0000 UTC m=+0.036135704 container create a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:13 compute-0 systemd[1]: Started libpod-conmon-a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879.scope.
Jan 21 13:45:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:13 compute-0 podman[82086]: 2026-01-21 13:45:13.306141541 +0000 UTC m=+0.021177987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:13 compute-0 podman[82086]: 2026-01-21 13:45:13.432348698 +0000 UTC m=+0.147385154 container init a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:13 compute-0 podman[82086]: 2026-01-21 13:45:13.440758079 +0000 UTC m=+0.155794545 container start a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 13:45:13 compute-0 podman[82086]: 2026-01-21 13:45:13.444505368 +0000 UTC m=+0.159541904 container attach a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 13:45:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:14 compute-0 epic_jones[82102]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:45:14 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:14 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:14 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new bb69e93d-312d-404f-89ad-65c71069da0f
Jan 21 13:45:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "bb69e93d-312d-404f-89ad-65c71069da0f"} v 0)
Jan 21 13:45:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/202145632' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "bb69e93d-312d-404f-89ad-65c71069da0f"} : dispatch
Jan 21 13:45:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 21 13:45:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:45:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/202145632' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bb69e93d-312d-404f-89ad-65c71069da0f"}]': finished
Jan 21 13:45:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 21 13:45:14 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 21 13:45:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 13:45:14 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:14 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 13:45:14 compute-0 ceph-mon[75031]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:14 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/202145632' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "bb69e93d-312d-404f-89ad-65c71069da0f"} : dispatch
Jan 21 13:45:14 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/202145632' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bb69e93d-312d-404f-89ad-65c71069da0f"}]': finished
Jan 21 13:45:14 compute-0 ceph-mon[75031]: osdmap e4: 1 total, 0 up, 1 in
Jan 21 13:45:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:14 compute-0 epic_jones[82102]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 21 13:45:14 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 21 13:45:14 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 13:45:14 compute-0 epic_jones[82102]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:14 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 21 13:45:14 compute-0 lvm[82194]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:45:14 compute-0 lvm[82194]: VG ceph_vg0 finished
Jan 21 13:45:14 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 21 13:45:15 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887925425' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 21 13:45:15 compute-0 epic_jones[82102]:  stderr: got monmap epoch 1
Jan 21 13:45:15 compute-0 epic_jones[82102]: --> Creating keyring file for osd.0
Jan 21 13:45:15 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 21 13:45:15 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 21 13:45:15 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid bb69e93d-312d-404f-89ad-65c71069da0f --setuser ceph --setgroup ceph
Jan 21 13:45:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:15 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 21 13:45:15 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 21 13:45:15 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1887925425' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 21 13:45:15 compute-0 ceph-mgr[75322]: [progress INFO root] Writing back 3 completed events
Jan 21 13:45:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 13:45:15 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:16 compute-0 epic_jones[82102]:  stderr: 2026-01-21T13:45:15.527+0000 7fb2261bf8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 21 13:45:16 compute-0 epic_jones[82102]:  stderr: 2026-01-21T13:45:15.550+0000 7fb2261bf8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 21 13:45:16 compute-0 epic_jones[82102]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
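The two stderr lines above are expected on a brand-new logical volume: before writing anything, ceph-osd --mkfs probes the device for an existing BlueStore label and fsid, and finding neither is exactly what "No valid bdev label found" and "_read_fsid unparsable uuid" report. A minimal sketch of verifying that mkfs then wrote a label, assuming ceph-bluestore-tool is on PATH on the host and using the LV path from the log:

    import json, subprocess

    def read_bluestore_label(dev: str) -> dict:
        # "show-label" dumps the BlueStore superblock as JSON keyed by
        # device path; after a successful mkfs it carries the osd_uuid.
        out = subprocess.run(
            ["ceph-bluestore-tool", "show-label", "--dev", dev],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    print(read_bluestore_label("/dev/ceph_vg0/ceph_lv0"))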
Jan 21 13:45:16 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 13:45:16 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 21 13:45:16 compute-0 epic_jones[82102]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:16 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:16 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 13:45:16 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 13:45:16 compute-0 epic_jones[82102]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 21 13:45:16 compute-0 epic_jones[82102]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
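Every "Running command:" line between "passed data devices" and this summary is one step of a single ceph-volume invocation: "lvm create" bundles the prepare phase (osd new against the mon, tmpfs mount, keyring, mkfs) with the activate phase (prime-osd-dir, block symlink, ownership fixes). A minimal sketch of the equivalent direct call, assuming ceph-volume and the bootstrap-osd keyring are available on the host rather than inside the cephadm container:

    import subprocess

    def create_bluestore_osd(vg_lv: str) -> None:
        # One call covers the whole prepare+activate sequence logged above.
        subprocess.run(
            ["ceph-volume", "lvm", "create", "--bluestore", "--data", vg_lv],
            check=True,
        )

    create_bluestore_osd("ceph_vg0/ceph_lv0")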
Jan 21 13:45:16 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:16 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:16 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e72716bc-fd8c-40ef-ada4-83584d595d05
Jan 21 13:45:16 compute-0 ceph-mon[75031]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:16 compute-0 ceph-mon[75031]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 21 13:45:16 compute-0 ceph-mon[75031]: Cluster is now healthy
Jan 21 13:45:16 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:16 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e72716bc-fd8c-40ef-ada4-83584d595d05"} v 0)
Jan 21 13:45:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/997373637' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e72716bc-fd8c-40ef-ada4-83584d595d05"} : dispatch
Jan 21 13:45:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 21 13:45:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:45:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/997373637' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e72716bc-fd8c-40ef-ada4-83584d595d05"}]': finished
Jan 21 13:45:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 21 13:45:17 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 21 13:45:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 13:45:17 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:17 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:17 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 13:45:17 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
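The "(2) No such file or directory" pairs are transient ENOENT results, not failures: "osd new" has reserved ids 0 and 1 in the osdmap, but osd metadata only exists once each daemon boots and registers, so the mgr's immediate lookup misses. A small polling sketch, assuming the ceph CLI and an admin keyring on the host:

    import json, subprocess, time

    def wait_for_osd_metadata(osd_id: int, timeout: float = 300.0) -> dict:
        # The mon answers ENOENT until the OSD daemon boots and reports in.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            proc = subprocess.run(
                ["ceph", "osd", "metadata", str(osd_id), "--format", "json"],
                capture_output=True, text=True,
            )
            if proc.returncode == 0:
                return json.loads(proc.stdout)
            time.sleep(5)
        raise TimeoutError(f"osd.{osd_id} never reported metadata")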
Jan 21 13:45:17 compute-0 lvm[83141]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:45:17 compute-0 lvm[83141]: VG ceph_vg1 finished
Jan 21 13:45:17 compute-0 epic_jones[82102]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 21 13:45:17 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 21 13:45:17 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 21 13:45:17 compute-0 epic_jones[82102]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:17 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 21 13:45:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:17 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/997373637' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e72716bc-fd8c-40ef-ada4-83584d595d05"} : dispatch
Jan 21 13:45:17 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/997373637' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e72716bc-fd8c-40ef-ada4-83584d595d05"}]': finished
Jan 21 13:45:17 compute-0 ceph-mon[75031]: osdmap e5: 2 total, 0 up, 2 in
Jan 21 13:45:17 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:17 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 21 13:45:17 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2830211096' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 21 13:45:17 compute-0 epic_jones[82102]:  stderr: got monmap epoch 1
Jan 21 13:45:17 compute-0 epic_jones[82102]: --> Creating keyring file for osd.1
Jan 21 13:45:17 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 21 13:45:17 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 21 13:45:17 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid e72716bc-fd8c-40ef-ada4-83584d595d05 --setuser ceph --setgroup ceph
Jan 21 13:45:18 compute-0 ceph-mon[75031]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:18 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2830211096' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 21 13:45:18 compute-0 epic_jones[82102]:  stderr: 2026-01-21T13:45:17.931+0000 7f321848f8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 21 13:45:18 compute-0 epic_jones[82102]:  stderr: 2026-01-21T13:45:17.952+0000 7f321848f8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 21 13:45:18 compute-0 epic_jones[82102]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 21 13:45:18 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:18 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 13:45:18 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 13:45:19 compute-0 epic_jones[82102]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 21 13:45:19 compute-0 epic_jones[82102]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8d905f10-e78d-4894-96b3-7b33a725e1b7
Jan 21 13:45:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8d905f10-e78d-4894-96b3-7b33a725e1b7"} v 0)
Jan 21 13:45:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/259288800' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "8d905f10-e78d-4894-96b3-7b33a725e1b7"} : dispatch
Jan 21 13:45:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 21 13:45:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:45:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/259288800' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8d905f10-e78d-4894-96b3-7b33a725e1b7"}]': finished
Jan 21 13:45:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 21 13:45:19 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 21 13:45:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 13:45:19 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:19 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:19 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 13:45:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:19 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:19 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:19 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:19 compute-0 lvm[84088]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:45:19 compute-0 lvm[84088]: VG ceph_vg2 finished
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:19 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 21 13:45:19 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/259288800' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "8d905f10-e78d-4894-96b3-7b33a725e1b7"} : dispatch
Jan 21 13:45:19 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/259288800' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8d905f10-e78d-4894-96b3-7b33a725e1b7"}]': finished
Jan 21 13:45:19 compute-0 ceph-mon[75031]: osdmap e6: 3 total, 0 up, 3 in
Jan 21 13:45:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 21 13:45:20 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103963103' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 21 13:45:20 compute-0 epic_jones[82102]:  stderr: got monmap epoch 1
Jan 21 13:45:20 compute-0 epic_jones[82102]: --> Creating keyring file for osd.2
Jan 21 13:45:20 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 21 13:45:20 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 21 13:45:20 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 8d905f10-e78d-4894-96b3-7b33a725e1b7 --setuser ceph --setgroup ceph
Jan 21 13:45:20 compute-0 ceph-mon[75031]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:20 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2103963103' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 21 13:45:20 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:21 compute-0 epic_jones[82102]:  stderr: 2026-01-21T13:45:20.357+0000 7f5fd2dcc8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 21 13:45:21 compute-0 epic_jones[82102]:  stderr: 2026-01-21T13:45:20.380+0000 7f5fd2dcc8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 21 13:45:21 compute-0 epic_jones[82102]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 21 13:45:21 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 13:45:21 compute-0 epic_jones[82102]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 21 13:45:21 compute-0 epic_jones[82102]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:21 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:21 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 21 13:45:21 compute-0 epic_jones[82102]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 13:45:21 compute-0 epic_jones[82102]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 21 13:45:21 compute-0 epic_jones[82102]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
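At this point all three LVs have been turned into OSDs and the osdmap reads "3 total, 0 up, 3 in" (epoch 6 above); "up" stays 0 until the daemons are deployed and boot later in the log. A quick check of those counters, with the JSON field names assumed from the usual "ceph osd stat" dump:

    import json, subprocess

    stat = json.loads(subprocess.run(
        ["ceph", "osd", "stat", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    # Expect 3 / 0 / 3, matching "osdmap e6: 3 total, 0 up, 3 in".
    print(stat["num_osds"], stat["num_up_osds"], stat["num_in_osds"])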
Jan 21 13:45:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:21 compute-0 systemd[1]: libpod-a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879.scope: Deactivated successfully.
Jan 21 13:45:21 compute-0 systemd[1]: libpod-a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879.scope: Consumed 6.459s CPU time.
Jan 21 13:45:21 compute-0 podman[85003]: 2026-01-21 13:45:21.578370874 +0000 UTC m=+0.031879043 container died a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 13:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e607a60d031e0e02ba16402930e8cd8cef78c88bfdb300279d49ca126e3fdac-merged.mount: Deactivated successfully.
Jan 21 13:45:21 compute-0 podman[85003]: 2026-01-21 13:45:21.629435134 +0000 UTC m=+0.082943223 container remove a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 13:45:21 compute-0 systemd[1]: libpod-conmon-a7d533fab8177c265b6a9c265052cd715de2b800f4a468b849e9aab853519879.scope: Deactivated successfully.
Jan 21 13:45:21 compute-0 sudo[82010]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:21 compute-0 sudo[85019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:21 compute-0 sudo[85019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:21 compute-0 sudo[85019]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:21 compute-0 sudo[85044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:45:21 compute-0 sudo[85044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:22 compute-0 podman[85081]: 2026-01-21 13:45:22.107710166 +0000 UTC m=+0.054041743 container create 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:22 compute-0 systemd[1]: Started libpod-conmon-1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12.scope.
Jan 21 13:45:22 compute-0 podman[85081]: 2026-01-21 13:45:22.081887448 +0000 UTC m=+0.028219125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:22 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:22 compute-0 podman[85081]: 2026-01-21 13:45:22.218507974 +0000 UTC m=+0.164839601 container init 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:22 compute-0 podman[85081]: 2026-01-21 13:45:22.231080654 +0000 UTC m=+0.177412251 container start 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 13:45:22 compute-0 podman[85081]: 2026-01-21 13:45:22.236056254 +0000 UTC m=+0.182387941 container attach 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Jan 21 13:45:22 compute-0 keen_stonebraker[85098]: 167 167
Jan 21 13:45:22 compute-0 systemd[1]: libpod-1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12.scope: Deactivated successfully.
Jan 21 13:45:22 compute-0 podman[85081]: 2026-01-21 13:45:22.238239785 +0000 UTC m=+0.184571412 container died 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 13:45:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-52e1114c46edaebe990d7c5a712660a9befb4a475dc9e26d83f9cecf20da17e7-merged.mount: Deactivated successfully.
Jan 21 13:45:22 compute-0 podman[85081]: 2026-01-21 13:45:22.277060393 +0000 UTC m=+0.223391970 container remove 1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 13:45:22 compute-0 systemd[1]: libpod-conmon-1fd568aec2d10658922c9b84866d8393659fbe6741a4f8893a03ecb712294c12.scope: Deactivated successfully.
Jan 21 13:45:22 compute-0 podman[85122]: 2026-01-21 13:45:22.488351034 +0000 UTC m=+0.052480456 container create c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 13:45:22 compute-0 systemd[1]: Started libpod-conmon-c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f.scope.
Jan 21 13:45:22 compute-0 podman[85122]: 2026-01-21 13:45:22.466735056 +0000 UTC m=+0.030864508 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:22 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a734413d9cba028c367f49b9027c9f538b514b01ff95945ea4c978a746e15fdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a734413d9cba028c367f49b9027c9f538b514b01ff95945ea4c978a746e15fdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a734413d9cba028c367f49b9027c9f538b514b01ff95945ea4c978a746e15fdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a734413d9cba028c367f49b9027c9f538b514b01ff95945ea4c978a746e15fdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
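The kernel prints these lines because the xfs filesystem backing the overlay was created without the bigtime feature, so its inode timestamps max out at 0x7fffffff seconds after the epoch, the classic 32-bit time_t limit. The cutoff is easy to confirm:

    from datetime import datetime, timezone

    # 0x7fffffff == 2147483647 seconds -> 2038-01-19 03:14:07 UTC
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))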
Jan 21 13:45:22 compute-0 podman[85122]: 2026-01-21 13:45:22.601029876 +0000 UTC m=+0.165159378 container init c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 13:45:22 compute-0 podman[85122]: 2026-01-21 13:45:22.618334 +0000 UTC m=+0.182463452 container start c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:22 compute-0 podman[85122]: 2026-01-21 13:45:22.623083924 +0000 UTC m=+0.187213366 container attach c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:45:22 compute-0 ceph-mon[75031]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:22 compute-0 stoic_jang[85139]: {
Jan 21 13:45:22 compute-0 stoic_jang[85139]:     "0": [
Jan 21 13:45:22 compute-0 stoic_jang[85139]:         {
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "devices": [
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "/dev/loop3"
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             ],
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_name": "ceph_lv0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_size": "21470642176",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "name": "ceph_lv0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "tags": {
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.cluster_name": "ceph",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.crush_device_class": "",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.encrypted": "0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.objectstore": "bluestore",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.osd_id": "0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.type": "block",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.vdo": "0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.with_tpm": "0"
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             },
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "type": "block",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "vg_name": "ceph_vg0"
Jan 21 13:45:22 compute-0 stoic_jang[85139]:         }
Jan 21 13:45:22 compute-0 stoic_jang[85139]:     ],
Jan 21 13:45:22 compute-0 stoic_jang[85139]:     "1": [
Jan 21 13:45:22 compute-0 stoic_jang[85139]:         {
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "devices": [
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "/dev/loop4"
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             ],
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_name": "ceph_lv1",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_size": "21470642176",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "name": "ceph_lv1",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "tags": {
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.cluster_name": "ceph",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.crush_device_class": "",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.encrypted": "0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.objectstore": "bluestore",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.osd_id": "1",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.type": "block",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.vdo": "0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.with_tpm": "0"
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             },
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "type": "block",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "vg_name": "ceph_vg1"
Jan 21 13:45:22 compute-0 stoic_jang[85139]:         }
Jan 21 13:45:22 compute-0 stoic_jang[85139]:     ],
Jan 21 13:45:22 compute-0 stoic_jang[85139]:     "2": [
Jan 21 13:45:22 compute-0 stoic_jang[85139]:         {
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "devices": [
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "/dev/loop5"
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             ],
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_name": "ceph_lv2",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_size": "21470642176",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "name": "ceph_lv2",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "tags": {
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.cluster_name": "ceph",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.crush_device_class": "",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.encrypted": "0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.objectstore": "bluestore",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.osd_id": "2",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.type": "block",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.vdo": "0",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:                 "ceph.with_tpm": "0"
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             },
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "type": "block",
Jan 21 13:45:22 compute-0 stoic_jang[85139]:             "vg_name": "ceph_vg2"
Jan 21 13:45:22 compute-0 stoic_jang[85139]:         }
Jan 21 13:45:22 compute-0 stoic_jang[85139]:     ]
Jan 21 13:45:22 compute-0 stoic_jang[85139]: }
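This JSON is the payload cephadm requested at 13:45:21 ("ceph-volume ... lvm list --format json"): a map from OSD id to the logical volumes backing it, with the authoritative ceph.* metadata duplicated in lv_tags and tags. A small sketch of reducing it to the facts an operator usually wants, using only keys visible above:

    import json

    def summarize(lvm_list: str) -> dict:
        # {osd_id: (lv_path, [physical devices], osd_fsid)}
        data = json.loads(lvm_list)
        return {
            osd_id: (lv["lv_path"], lv["devices"], lv["tags"]["ceph.osd_fsid"])
            for osd_id, lvs in data.items()
            for lv in lvs
        }

    # e.g. summarize(payload)["0"] ->
    #   ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"],
    #    "bb69e93d-312d-404f-89ad-65c71069da0f")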
Jan 21 13:45:22 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:22 compute-0 systemd[1]: libpod-c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f.scope: Deactivated successfully.
Jan 21 13:45:22 compute-0 podman[85122]: 2026-01-21 13:45:22.972772612 +0000 UTC m=+0.536902064 container died c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a734413d9cba028c367f49b9027c9f538b514b01ff95945ea4c978a746e15fdc-merged.mount: Deactivated successfully.
Jan 21 13:45:23 compute-0 podman[85122]: 2026-01-21 13:45:23.033066653 +0000 UTC m=+0.597196065 container remove c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jang, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:23 compute-0 systemd[1]: libpod-conmon-c7a84a9b8bd21f55e07e176955dc740149f5db271223432fad7ca74a0ffe6f9f.scope: Deactivated successfully.
Jan 21 13:45:23 compute-0 sudo[85044]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 21 13:45:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 21 13:45:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:23 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 21 13:45:23 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 21 13:45:23 compute-0 sudo[85159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:23 compute-0 sudo[85159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:23 compute-0 sudo[85159]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:23 compute-0 sudo[85184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:45:23 compute-0 sudo[85184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:23 compute-0 podman[85248]: 2026-01-21 13:45:23.694937461 +0000 UTC m=+0.046608704 container create 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:23 compute-0 systemd[1]: Started libpod-conmon-20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666.scope.
Jan 21 13:45:23 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:23 compute-0 podman[85248]: 2026-01-21 13:45:23.67435951 +0000 UTC m=+0.026030783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:23 compute-0 podman[85248]: 2026-01-21 13:45:23.773579592 +0000 UTC m=+0.125250865 container init 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 13:45:23 compute-0 podman[85248]: 2026-01-21 13:45:23.783818666 +0000 UTC m=+0.135489919 container start 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 13:45:23 compute-0 zealous_lederberg[85265]: 167 167
Jan 21 13:45:23 compute-0 systemd[1]: libpod-20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666.scope: Deactivated successfully.
Jan 21 13:45:23 compute-0 podman[85248]: 2026-01-21 13:45:23.788645301 +0000 UTC m=+0.140316554 container attach 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:23 compute-0 podman[85248]: 2026-01-21 13:45:23.790312282 +0000 UTC m=+0.141983545 container died 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8817ace1e34d7e611acac82f335932dd736780e627865feb9faa37b60dde83e5-merged.mount: Deactivated successfully.
Jan 21 13:45:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 21 13:45:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:23 compute-0 ceph-mon[75031]: Deploying daemon osd.0 on compute-0
Jan 21 13:45:23 compute-0 podman[85248]: 2026-01-21 13:45:23.840457639 +0000 UTC m=+0.192128872 container remove 20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lederberg, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:23 compute-0 systemd[1]: libpod-conmon-20c80ea48a9efa7620bb5fc0ba5b70c7ad03b83b22b49599d6b78e8eeb770666.scope: Deactivated successfully.
Jan 21 13:45:24 compute-0 podman[85293]: 2026-01-21 13:45:24.116790945 +0000 UTC m=+0.041702508 container create 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:24 compute-0 systemd[1]: Started libpod-conmon-08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237.scope.
Jan 21 13:45:24 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:24 compute-0 podman[85293]: 2026-01-21 13:45:24.195960726 +0000 UTC m=+0.120872349 container init 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 13:45:24 compute-0 podman[85293]: 2026-01-21 13:45:24.101701614 +0000 UTC m=+0.026613177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:24 compute-0 podman[85293]: 2026-01-21 13:45:24.209518571 +0000 UTC m=+0.134430114 container start 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:24 compute-0 podman[85293]: 2026-01-21 13:45:24.214049129 +0000 UTC m=+0.138960722 container attach 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:24 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test[85309]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 21 13:45:24 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test[85309]:                             [--no-systemd] [--no-tmpfs]
Jan 21 13:45:24 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test[85309]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 21 13:45:24 compute-0 systemd[1]: libpod-08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237.scope: Deactivated successfully.
Jan 21 13:45:24 compute-0 podman[85293]: 2026-01-21 13:45:24.407044821 +0000 UTC m=+0.331956364 container died 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 13:45:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb571c80133b832773cb52c9c50ae18628ad797e8198d24bd3be08901b882c84-merged.mount: Deactivated successfully.
Jan 21 13:45:24 compute-0 podman[85293]: 2026-01-21 13:45:24.450232744 +0000 UTC m=+0.375144287 container remove 08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:24 compute-0 systemd[1]: libpod-conmon-08c4379ccaf04868be61e290524c29185fd808ee6d3bbbccfffccfcb33088237.scope: Deactivated successfully.
Jan 21 13:45:24 compute-0 systemd[1]: Reloading.
Jan 21 13:45:24 compute-0 systemd-rc-local-generator[85372]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:45:24 compute-0 ceph-mon[75031]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:24 compute-0 systemd-sysv-generator[85376]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof.
Jan 21 13:45:24 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:25 compute-0 systemd[1]: Reloading.
Jan 21 13:45:25 compute-0 systemd-rc-local-generator[85408]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:45:25 compute-0 systemd-sysv-generator[85411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof.
Jan 21 13:45:25 compute-0 systemd[1]: Starting Ceph osd.0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:45:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:25 compute-0 podman[85468]: 2026-01-21 13:45:25.692628938 +0000 UTC m=+0.060925817 container create 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:25 compute-0 podman[85468]: 2026-01-21 13:45:25.666503204 +0000 UTC m=+0.034800163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:25 compute-0 podman[85468]: 2026-01-21 13:45:25.771547205 +0000 UTC m=+0.139844194 container init 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 13:45:25 compute-0 podman[85468]: 2026-01-21 13:45:25.788504 +0000 UTC m=+0.156800879 container start 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:25 compute-0 podman[85468]: 2026-01-21 13:45:25.792837403 +0000 UTC m=+0.161134362 container attach 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:25 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:25 compute-0 bash[85468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:26 compute-0 bash[85468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:26 compute-0 lvm[85570]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:45:26 compute-0 lvm[85569]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:45:26 compute-0 lvm[85569]: VG ceph_vg1 finished
Jan 21 13:45:26 compute-0 lvm[85570]: VG ceph_vg0 finished
Jan 21 13:45:26 compute-0 lvm[85572]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:45:26 compute-0 lvm[85572]: VG ceph_vg2 finished
Jan 21 13:45:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 13:45:26 compute-0 bash[85468]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 13:45:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:26 compute-0 bash[85468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:26 compute-0 bash[85468]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 13:45:26 compute-0 bash[85468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 13:45:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 21 13:45:26 compute-0 bash[85468]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 21 13:45:26 compute-0 ceph-mon[75031]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:26 compute-0 bash[85468]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:26 compute-0 bash[85468]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 13:45:26 compute-0 bash[85468]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 21 13:45:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 13:45:26 compute-0 bash[85468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 21 13:45:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate[85484]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 21 13:45:26 compute-0 bash[85468]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 21 13:45:26 compute-0 systemd[1]: libpod-7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd.scope: Deactivated successfully.
Jan 21 13:45:26 compute-0 podman[85468]: 2026-01-21 13:45:26.944179161 +0000 UTC m=+1.312476050 container died 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:26 compute-0 systemd[1]: libpod-7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd.scope: Consumed 1.602s CPU time.
Jan 21 13:45:26 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e48642f45d7ded7b140b0978cdd802db088c1975e13e327352f3947fc0d00724-merged.mount: Deactivated successfully.
Jan 21 13:45:26 compute-0 podman[85468]: 2026-01-21 13:45:26.99013873 +0000 UTC m=+1.358435609 container remove 7a7b70944d3dfe2b215d72a0a13af1be22cd91e40833bc66ba37d156e1087afd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 13:45:27 compute-0 podman[85721]: 2026-01-21 13:45:27.166251639 +0000 UTC m=+0.039420743 container create 534fa4fe41482b2f0b6a4ea9687ef5d59a9f50942c275fca1e5f7b80f4698ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 13:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41e4d082bdd3b9695ce1c7daa59f5376fd2fbc82fc1283c2a9888a3bd6eb12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41e4d082bdd3b9695ce1c7daa59f5376fd2fbc82fc1283c2a9888a3bd6eb12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41e4d082bdd3b9695ce1c7daa59f5376fd2fbc82fc1283c2a9888a3bd6eb12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41e4d082bdd3b9695ce1c7daa59f5376fd2fbc82fc1283c2a9888a3bd6eb12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41e4d082bdd3b9695ce1c7daa59f5376fd2fbc82fc1283c2a9888a3bd6eb12/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:27 compute-0 podman[85721]: 2026-01-21 13:45:27.147535291 +0000 UTC m=+0.020704385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:27 compute-0 podman[85721]: 2026-01-21 13:45:27.248036983 +0000 UTC m=+0.121206067 container init 534fa4fe41482b2f0b6a4ea9687ef5d59a9f50942c275fca1e5f7b80f4698ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:45:27 compute-0 podman[85721]: 2026-01-21 13:45:27.258991485 +0000 UTC m=+0.132160559 container start 534fa4fe41482b2f0b6a4ea9687ef5d59a9f50942c275fca1e5f7b80f4698ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:27 compute-0 bash[85721]: 534fa4fe41482b2f0b6a4ea9687ef5d59a9f50942c275fca1e5f7b80f4698ff5
Jan 21 13:45:27 compute-0 systemd[1]: Started Ceph osd.0 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:45:27 compute-0 ceph-osd[85740]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 13:45:27 compute-0 ceph-osd[85740]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: pidfile_write: ignore empty --pid-file
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 sudo[85184]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 21 13:45:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 21 13:45:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:27 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:27 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 21 13:45:27 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 sudo[85755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:27 compute-0 sudo[85755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:27 compute-0 sudo[85755]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0400 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea0000 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 sudo[85786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:45:27 compute-0 sudo[85786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:27 compute-0 ceph-osd[85740]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 21 13:45:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:27 compute-0 ceph-osd[85740]: load: jerasure load: lrc 
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 ceph-osd[85740]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 21 13:45:27 compute-0 ceph-osd[85740]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eecea1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount shared_bdev_used = 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: RocksDB version: 7.9.2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Git sha 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: DB SUMMARY
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: DB Session ID:  TDEQH9BGDEPQOYZNFORS
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: CURRENT file:  CURRENT
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                         Options.error_if_exists: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.create_if_missing: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                                     Options.env: 0x557eecd31ea0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                                Options.info_log: 0x557eedd8c8a0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                              Options.statistics: (nil)
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.use_fsync: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                              Options.db_log_dir: 
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.write_buffer_manager: 0x557eedc32b40
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.unordered_write: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.row_cache: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                              Options.wal_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.two_write_queues: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.wal_compression: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.atomic_flush: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.max_background_jobs: 4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.max_background_compactions: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.max_subcompactions: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.max_open_files: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Compression algorithms supported:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kZSTD supported: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kXpressCompression supported: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kBZip2Compression supported: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kLZ4Compression supported: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kZlibCompression supported: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kSnappyCompression supported: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd35a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd35a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd35a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 913d458a-c02d-4bc9-b6ba-f790bdbfb0ef
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127690333, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127692308, "job": 1, "event": "recovery_finished"}
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: freelist init
Jan 21 13:45:27 compute-0 ceph-osd[85740]: freelist _read_cfg
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs umount
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) close
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bdev(0x557eedb41800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluefs mount shared_bdev_used = 27262976
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: RocksDB version: 7.9.2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Git sha 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: DB SUMMARY
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: DB Session ID:  TDEQH9BGDEPQOYZNFORT
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: CURRENT file:  CURRENT
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                         Options.error_if_exists: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.create_if_missing: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                                     Options.env: 0x557eecd31ce0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                                Options.info_log: 0x557eedddd760
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                              Options.statistics: (nil)
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.use_fsync: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                              Options.db_log_dir: 
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.write_buffer_manager: 0x557eedc32b40
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.unordered_write: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.row_cache: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                              Options.wal_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.two_write_queues: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.wal_compression: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.atomic_flush: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.max_background_jobs: 4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.max_background_compactions: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.max_subcompactions: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.max_open_files: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Compression algorithms supported:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kZSTD supported: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kXpressCompression supported: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kBZip2Compression supported: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kLZ4Compression supported: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kZlibCompression supported: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         kSnappyCompression supported: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd358d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8d0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd35a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8d0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd35a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557eedd8d0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557eecd35a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 913d458a-c02d-4bc9-b6ba-f790bdbfb0ef
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127745117, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127749999, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003127, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "913d458a-c02d-4bc9-b6ba-f790bdbfb0ef", "db_session_id": "TDEQH9BGDEPQOYZNFORT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127753026, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003127, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "913d458a-c02d-4bc9-b6ba-f790bdbfb0ef", "db_session_id": "TDEQH9BGDEPQOYZNFORT", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127756033, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003127, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "913d458a-c02d-4bc9-b6ba-f790bdbfb0ef", "db_session_id": "TDEQH9BGDEPQOYZNFORT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003127757495, "job": 1, "event": "recovery_finished"}
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557eedd8e000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: DB pointer 0x557eedf46000
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 21 13:45:27 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 13:45:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 13:45:27 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 21 13:45:27 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 21 13:45:27 compute-0 ceph-osd[85740]: _get_class not permitted to load lua
Jan 21 13:45:27 compute-0 ceph-osd[85740]: _get_class not permitted to load sdk
Jan 21 13:45:27 compute-0 ceph-osd[85740]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 21 13:45:27 compute-0 ceph-osd[85740]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 21 13:45:27 compute-0 ceph-osd[85740]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 21 13:45:27 compute-0 ceph-osd[85740]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 21 13:45:27 compute-0 ceph-osd[85740]: osd.0 0 load_pgs
Jan 21 13:45:27 compute-0 ceph-osd[85740]: osd.0 0 load_pgs opened 0 pgs
Jan 21 13:45:27 compute-0 ceph-osd[85740]: osd.0 0 log_to_monitors true
Jan 21 13:45:27 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0[85736]: 2026-01-21T13:45:27.787+0000 7f7953eb78c0 -1 osd.0 0 log_to_monitors true
Jan 21 13:45:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 21 13:45:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 21 13:45:27 compute-0 podman[86288]: 2026-01-21 13:45:27.914481301 +0000 UTC m=+0.040852157 container create ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:45:27 compute-0 systemd[1]: Started libpod-conmon-ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd.scope.
Jan 21 13:45:27 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:27 compute-0 podman[86288]: 2026-01-21 13:45:27.993318246 +0000 UTC m=+0.119689132 container init ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 13:45:27 compute-0 podman[86288]: 2026-01-21 13:45:27.898447829 +0000 UTC m=+0.024818715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:27 compute-0 podman[86288]: 2026-01-21 13:45:27.999368581 +0000 UTC m=+0.125739477 container start ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:28 compute-0 exciting_haibt[86305]: 167 167
Jan 21 13:45:28 compute-0 systemd[1]: libpod-ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd.scope: Deactivated successfully.
Jan 21 13:45:28 compute-0 podman[86288]: 2026-01-21 13:45:28.00478358 +0000 UTC m=+0.131154466 container attach ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:28 compute-0 podman[86288]: 2026-01-21 13:45:28.005023286 +0000 UTC m=+0.131394162 container died ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c032c07956f5fd9cb2913e23a301b9a117baadbe0fb73f38b1e55f7faa97fe84-merged.mount: Deactivated successfully.
Jan 21 13:45:28 compute-0 podman[86288]: 2026-01-21 13:45:28.044655353 +0000 UTC m=+0.171026209 container remove ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_haibt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 21 13:45:28 compute-0 systemd[1]: libpod-conmon-ecb0eaecbde77bc699c165fb84e95f8598c8e074289f32d2051f484263b538cd.scope: Deactivated successfully.
Jan 21 13:45:28 compute-0 podman[86334]: 2026-01-21 13:45:28.332506993 +0000 UTC m=+0.062508685 container create d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 13:45:28 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:28 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:28 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 21 13:45:28 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:28 compute-0 ceph-mon[75031]: Deploying daemon osd.1 on compute-0
Jan 21 13:45:28 compute-0 ceph-mon[75031]: from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 21 13:45:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 21 13:45:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:45:28 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 21 13:45:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 21 13:45:28 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 21 13:45:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 21 13:45:28 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 13:45:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 21 13:45:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 13:45:28 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:28 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:28 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 13:45:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:28 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:28 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:28 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:28 compute-0 systemd[1]: Started libpod-conmon-d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631.scope.
Jan 21 13:45:28 compute-0 podman[86334]: 2026-01-21 13:45:28.306695736 +0000 UTC m=+0.036697468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:28 compute-0 podman[86334]: 2026-01-21 13:45:28.447079812 +0000 UTC m=+0.177081504 container init d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 13:45:28 compute-0 podman[86334]: 2026-01-21 13:45:28.458338241 +0000 UTC m=+0.188339893 container start d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:28 compute-0 podman[86334]: 2026-01-21 13:45:28.461826894 +0000 UTC m=+0.191828636 container attach d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 13:45:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test[86350]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 21 13:45:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test[86350]:                             [--no-systemd] [--no-tmpfs]
Jan 21 13:45:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test[86350]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 21 13:45:28 compute-0 systemd[1]: libpod-d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631.scope: Deactivated successfully.
Jan 21 13:45:28 compute-0 podman[86334]: 2026-01-21 13:45:28.655455641 +0000 UTC m=+0.385457333 container died d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:45:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca149f483a8461747ebab750b8c3c2154bd15f6e579428396f29dc99179d6cdd-merged.mount: Deactivated successfully.
Jan 21 13:45:28 compute-0 podman[86334]: 2026-01-21 13:45:28.713157741 +0000 UTC m=+0.443159413 container remove d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate-test, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 13:45:28 compute-0 systemd[1]: libpod-conmon-d68f50a3287c6f568a63d100899646dd4fc13c6f4601682930fb07e167ca1631.scope: Deactivated successfully.
Jan 21 13:45:28 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 21 13:45:28 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 21 13:45:28 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:29 compute-0 systemd[1]: Reloading.
Jan 21 13:45:29 compute-0 systemd-rc-local-generator[86408]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:45:29 compute-0 systemd-sysv-generator[86411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:45:29 compute-0 systemd[1]: Reloading.
Jan 21 13:45:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 21 13:45:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:45:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 13:45:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 21 13:45:29 compute-0 ceph-osd[85740]: osd.0 0 done with init, starting boot process
Jan 21 13:45:29 compute-0 ceph-osd[85740]: osd.0 0 start_boot
Jan 21 13:45:29 compute-0 ceph-osd[85740]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 21 13:45:29 compute-0 ceph-osd[85740]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 21 13:45:29 compute-0 ceph-osd[85740]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 21 13:45:29 compute-0 ceph-osd[85740]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 21 13:45:29 compute-0 ceph-osd[85740]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 21 13:45:29 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 21 13:45:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 13:45:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:29 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:29 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:29 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 13:45:29 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3094763527; not ready for session (expect reconnect)
Jan 21 13:45:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 13:45:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:29 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 13:45:29 compute-0 ceph-mon[75031]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:29 compute-0 ceph-mon[75031]: from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 21 13:45:29 compute-0 ceph-mon[75031]: osdmap e7: 3 total, 0 up, 3 in
Jan 21 13:45:29 compute-0 ceph-mon[75031]: from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 13:45:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:29 compute-0 systemd-rc-local-generator[86454]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:45:29 compute-0 systemd-sysv-generator[86458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:45:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:29 compute-0 systemd[1]: Starting Ceph osd.1 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:45:29 compute-0 podman[86510]: 2026-01-21 13:45:29.830272211 +0000 UTC m=+0.059730459 container create 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 13:45:29 compute-0 podman[86510]: 2026-01-21 13:45:29.796214647 +0000 UTC m=+0.025672905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:29 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:29 compute-0 podman[86510]: 2026-01-21 13:45:29.974263442 +0000 UTC m=+0.203721750 container init 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 13:45:29 compute-0 podman[86510]: 2026-01-21 13:45:29.985584573 +0000 UTC m=+0.215042801 container start 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:30 compute-0 podman[86510]: 2026-01-21 13:45:30.006342249 +0000 UTC m=+0.235800507 container attach 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:30 compute-0 bash[86510]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:30 compute-0 bash[86510]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:30 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3094763527; not ready for session (expect reconnect)
Jan 21 13:45:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 13:45:30 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:30 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 13:45:30 compute-0 ceph-mon[75031]: from='osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 13:45:30 compute-0 ceph-mon[75031]: osdmap e8: 3 total, 0 up, 3 in
Jan 21 13:45:30 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:30 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:30 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:30 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:30 compute-0 ceph-mon[75031]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:30 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:30 compute-0 lvm[86612]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:45:30 compute-0 lvm[86612]: VG ceph_vg1 finished
Jan 21 13:45:30 compute-0 lvm[86611]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:45:30 compute-0 lvm[86611]: VG ceph_vg0 finished
Jan 21 13:45:30 compute-0 lvm[86614]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:45:30 compute-0 lvm[86614]: VG ceph_vg2 finished
Jan 21 13:45:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 13:45:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:30 compute-0 bash[86510]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 13:45:30 compute-0 bash[86510]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:30 compute-0 bash[86510]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:30 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:31 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 13:45:31 compute-0 bash[86510]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 13:45:31 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 21 13:45:31 compute-0 bash[86510]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 21 13:45:31 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:31 compute-0 bash[86510]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:31 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:31 compute-0 bash[86510]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:31 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 21 13:45:31 compute-0 bash[86510]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 21 13:45:31 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 13:45:31 compute-0 bash[86510]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 21 13:45:31 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate[86525]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 21 13:45:31 compute-0 bash[86510]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 21 13:45:31 compute-0 systemd[1]: libpod-473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5.scope: Deactivated successfully.
Jan 21 13:45:31 compute-0 systemd[1]: libpod-473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5.scope: Consumed 1.652s CPU time.
Jan 21 13:45:31 compute-0 podman[86717]: 2026-01-21 13:45:31.299843405 +0000 UTC m=+0.047158118 container died 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 13:45:31 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3094763527; not ready for session (expect reconnect)
Jan 21 13:45:31 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 13:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e88297438cb4fb24876e08fa36ff51e955f38bf602ed41e7437a1169d9084c6-merged.mount: Deactivated successfully.
Jan 21 13:45:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 13:45:31 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:31 compute-0 podman[86717]: 2026-01-21 13:45:31.45404486 +0000 UTC m=+0.201359573 container remove 473f0f5116a778aec2bab28b19d1581a3749991753eebd711f7d81654b980ce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 13:45:31 compute-0 ceph-mon[75031]: purged_snaps scrub starts
Jan 21 13:45:31 compute-0 ceph-mon[75031]: purged_snaps scrub ok
Jan 21 13:45:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:31 compute-0 podman[86776]: 2026-01-21 13:45:31.733194393 +0000 UTC m=+0.044190858 container create 75f58788bd5e57ff46589e9f1af96c16843986114eb397264a3d93ae1812e893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74a339daec7eb078add16a2d9c45bf7ca64a4e835121c5db2c2bcb888ea8679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74a339daec7eb078add16a2d9c45bf7ca64a4e835121c5db2c2bcb888ea8679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74a339daec7eb078add16a2d9c45bf7ca64a4e835121c5db2c2bcb888ea8679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74a339daec7eb078add16a2d9c45bf7ca64a4e835121c5db2c2bcb888ea8679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74a339daec7eb078add16a2d9c45bf7ca64a4e835121c5db2c2bcb888ea8679/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:31 compute-0 podman[86776]: 2026-01-21 13:45:31.711002042 +0000 UTC m=+0.021998567 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:31 compute-0 podman[86776]: 2026-01-21 13:45:31.827484066 +0000 UTC m=+0.138480531 container init 75f58788bd5e57ff46589e9f1af96c16843986114eb397264a3d93ae1812e893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:45:31 compute-0 podman[86776]: 2026-01-21 13:45:31.83772351 +0000 UTC m=+0.148719975 container start 75f58788bd5e57ff46589e9f1af96c16843986114eb397264a3d93ae1812e893 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 13:45:31 compute-0 bash[86776]: 75f58788bd5e57ff46589e9f1af96c16843986114eb397264a3d93ae1812e893
Jan 21 13:45:31 compute-0 systemd[1]: Started Ceph osd.1 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:45:31 compute-0 ceph-osd[86795]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 13:45:31 compute-0 ceph-osd[86795]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 21 13:45:31 compute-0 ceph-osd[86795]: pidfile_write: ignore empty --pid-file
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:31 compute-0 sudo[85786]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:31 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:32 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e400 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193e000 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:32 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 21 13:45:32 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 21 13:45:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:32 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:32 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 21 13:45:32 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 21 13:45:32 compute-0 ceph-osd[86795]: load: jerasure load: lrc 
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:32 compute-0 sudo[86819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:32 compute-0 sudo[86819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:32 compute-0 sudo[86819]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:32 compute-0 ceph-osd[86795]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 21 13:45:32 compute-0 ceph-osd[86795]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:32 compute-0 sudo[86864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:45:32 compute-0 sudo[86864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x56235193fc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount shared_bdev_used = 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: RocksDB version: 7.9.2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Git sha 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: DB SUMMARY
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: DB Session ID:  VBALY7Y4KVO2SNNGS5VC
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: CURRENT file:  CURRENT
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                         Options.error_if_exists: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.create_if_missing: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                                     Options.env: 0x5623517cfea0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                                Options.info_log: 0x5623528608a0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                              Options.statistics: (nil)
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.use_fsync: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                              Options.db_log_dir: 
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.write_buffer_manager: 0x562352706b40
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.unordered_write: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.row_cache: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                              Options.wal_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.two_write_queues: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.wal_compression: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.atomic_flush: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.max_background_jobs: 4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.max_background_compactions: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.max_subcompactions: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.max_open_files: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Compression algorithms supported:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kZSTD supported: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kXpressCompression supported: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kBZip2Compression supported: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kLZ4Compression supported: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kZlibCompression supported: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kSnappyCompression supported: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d58d205c-2573-48b5-a4ae-6f3ea37ef9cd
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132265992, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132267913, "job": 1, "event": "recovery_finished"}
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: freelist init
Jan 21 13:45:32 compute-0 ceph-osd[86795]: freelist _read_cfg
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs umount
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) close
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bdev(0x5623525df800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluefs mount shared_bdev_used = 27262976
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: RocksDB version: 7.9.2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Git sha 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: DB SUMMARY
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: DB Session ID:  VBALY7Y4KVO2SNNGS5VD
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: CURRENT file:  CURRENT
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                         Options.error_if_exists: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.create_if_missing: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                                     Options.env: 0x5623517cfce0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                                Options.info_log: 0x562352860960
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                              Options.statistics: (nil)
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.use_fsync: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                              Options.db_log_dir: 
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.write_buffer_manager: 0x562352706b40
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.unordered_write: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.row_cache: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                              Options.wal_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.two_write_queues: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.wal_compression: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.atomic_flush: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.max_background_jobs: 4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.max_background_compactions: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.max_subcompactions: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.max_open_files: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Compression algorithms supported:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kZSTD supported: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kXpressCompression supported: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kBZip2Compression supported: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kLZ4Compression supported: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kZlibCompression supported: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         kSnappyCompression supported: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562352860bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d38d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
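
The BinnedLRUCache capacity printed for this shard, 483183820 bytes, is exactly the integer part of 0.45 * 1 GiB, while the O-* shards below report 536870912 bytes (0.50 * 1 GiB), and the three O-* dumps all print the same block_cache pointer (0x5623517d3a30), so they share one cache object distinct from p-2's (0x5623517d38d0). This reads like a ratio split of a roughly 1 GiB cache budget, plausibly BlueStore's cache sizing; the split is an inference, the log only shows the resulting capacities. A minimal Python check of the arithmetic:

    GIB = 1024 ** 3

    # Capacities as printed in the block_cache_options above.
    p_shard_cache = 483_183_820   # cache used by the p-2 column family
    o_shard_cache = 536_870_912   # cache shared by the O-0/O-1/O-2 families

    print(p_shard_cache / GIB)    # 0.44999999925... i.e. int(0.45 * GIB)
    print(o_shard_cache / GIB)    # 0.5
    assert p_shard_cache == int(0.45 * GIB)
    assert o_shard_cache == GIB // 2
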
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5623528610c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
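
With write_buffer_size = 16 MiB, max_write_buffer_number = 64 and min_write_buffer_number_to_merge = 6, each of these column families flushes in batches of roughly 6 x 16 MiB and can, in the worst case, accumulate up to 1 GiB of memtables before writes stall. A worked sketch of those bounds, using standard RocksDB semantics and the values printed above:

    MIB = 1024 ** 2

    write_buffer_size = 16 * MIB   # Options.write_buffer_size
    max_write_buffers = 64         # Options.max_write_buffer_number
    merge_threshold   = 6          # Options.min_write_buffer_number_to_merge

    # Immutable memtables accumulate until `merge_threshold` of them exist,
    # then are flushed together as one larger L0 file.
    flush_batch = merge_threshold * write_buffer_size
    print(flush_batch // MIB)      # 96 MiB per flush batch

    # Hard ceiling on memtable memory for one column family.
    memtable_ceiling = max_write_buffers * write_buffer_size
    print(memtable_ceiling // MIB) # 1024 MiB
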
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5623528610c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
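
These per-column-family dumps are line-oriented and near-identical, which makes them easy to diff mechanically rather than by eye. A small parsing sketch, assuming input lines shaped like the ones in this log and collecting only the Options.* lines (the CF_HEADER / OPT names and the "osd.log" path are illustrative, not a Ceph or RocksDB API):

    import re

    CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
    OPT = re.compile(r"Options\.([A-Za-z0-9_.\[\]]+):\s+(.+?)\s*$")

    def parse_cf_options(lines):
        """Group 'Options.foo: bar' lines under the most recent CF header."""
        cfs, current = {}, None
        for line in lines:
            m = CF_HEADER.search(line)
            if m:
                current = cfs.setdefault(m.group(1), {})
                continue
            m = OPT.search(line)
            if m and current is not None:
                current[m.group(1)] = m.group(2)
        return cfs

    # e.g. cfs = parse_cf_options(open("osd.log")); diff cfs["p-2"] vs cfs["O-0"].
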
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5623528610c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5623517d3a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
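
Every shard here runs classic level compaction with a static layout: max_bytes_for_level_base = 1 GiB, max_bytes_for_level_multiplier = 8, num_levels = 7, and level_compaction_dynamic_level_bytes disabled. A quick sketch of the per-level size targets that implies:

    GIB = 1024 ** 3

    level_base = 1 * GIB   # Options.max_bytes_for_level_base
    multiplier = 8         # Options.max_bytes_for_level_multiplier
    num_levels = 7         # Options.num_levels (L0..L6)

    # L0 is file-count driven (level0_file_num_compaction_trigger = 8);
    # L1..L6 are byte-size driven.
    for level in range(1, num_levels):
        target = level_base * multiplier ** (level - 1)
        print(f"L{level}: {target / GIB:g} GiB")
    # L1: 1, L2: 8, L3: 64, L4: 512, L5: 4096, L6: 32768 (GiB)
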
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
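
All twelve column families (IDs 0 through 11, matching max_column_family 11 in the manifest line above) recover against log number 5. The m-*/p-*/O-*/L/P naming is BlueStore's sharded-RocksDB layout, an inference from the names since the log itself only lists them. Extracting the list mechanically, with a pattern matched to these exact lines:

    import re

    CF_LINE = re.compile(
        r"Column family \[([^\]]+)\] \(ID (\d+)\), log number is (\d+)"
    )

    def column_families(lines):
        """Yield (name, cf_id, log_number) for each version_set recovery line."""
        for line in lines:
            m = CF_LINE.search(line)
            if m:
                yield m.group(1), int(m.group(2)), int(m.group(3))

    # On this log: ('default', 0, 5), ('m-0', 1, 5), ... ('P', 11, 5)
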
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d58d205c-2573-48b5-a4ae-6f3ea37ef9cd
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132326566, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132335433, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003132, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d58d205c-2573-48b5-a4ae-6f3ea37ef9cd", "db_session_id": "VBALY7Y4KVO2SNNGS5VD", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:45:32 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3094763527; not ready for session (expect reconnect)
Jan 21 13:45:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 13:45:32 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:32 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
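
The "(2) No such file or directory" here is ENOENT from the monitor: osd.0 is still booting (the rocksdb recovery above is in flight) and has not yet registered its metadata, so the mgr's "osd metadata" query has nothing to return. That makes this failure typically transient at startup; an interpretation, since the log only shows the failed dispatch. Once the OSD is up, the same query the mgr dispatched can be rechecked by hand:

    import json
    import subprocess

    def osd_metadata(osd_id: int) -> dict:
        """Re-run the 'osd metadata' query via the ceph CLI."""
        out = subprocess.check_output(
            ["ceph", "osd", "metadata", str(osd_id), "--format", "json"]
        )
        return json.loads(out)

    # osd_metadata(0) succeeds once osd.0 has booted and reported in.
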
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132368803, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003132, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d58d205c-2573-48b5-a4ae-6f3ea37ef9cd", "db_session_id": "VBALY7Y4KVO2SNNGS5VD", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132374844, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003132, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d58d205c-2573-48b5-a4ae-6f3ea37ef9cd", "db_session_id": "VBALY7Y4KVO2SNNGS5VD", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003132407625, "job": 1, "event": "recovery_finished"}
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562352886000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: DB pointer 0x562352a1a000
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
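The _open_db line lists the effective RocksDB options for this BlueStore instance as one comma-separated key=value string. Purely as illustration, that string splits cleanly into a dict; values such as "2MB" stay as strings, since RocksDB accepts human-readable sizes in this form:

# Illustrative only: parse the options string from the _open_db line above.
OPTS = ("compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

options = dict(kv.split("=", 1) for kv in OPTS.split(","))
assert options["write_buffer_size"] == "16777216"          # 16 MiB memtables
assert options["max_bytes_for_level_base"] == "1073741824" # 1 GiB L1 target
print(options["compaction_style"])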
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 21 13:45:32 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 13:45:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
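The stats dump above repeats one near-identical block per column family (default, m-0..m-2, p-0..p-2, O-0..O-2, L, P); almost all are empty because this OSD was created moments earlier. A sketch that condenses such a dump to one line per column family, using the Sum row of each CF's first table (journal.txt is again a hypothetical export):

# Summarize a rocksdb "DUMPING STATS" block: one line per column family.
import re

CF_HDR = re.compile(r"\*\* Compaction Stats \[(.+?)\] \*\*")
SUM_ROW = re.compile(r"^\s*Sum\s+(\S+)\s+(\S+ \S+)")  # files, "size unit"

def cf_summary(path: str):
    cf, seen = None, set()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = CF_HDR.search(line)
            if m:
                cf = m.group(1)
                continue
            m = SUM_ROW.match(line)
            if m and cf and cf not in seen:
                seen.add(cf)  # each CF prints two tables; keep the first Sum row
                yield cf, m.group(1), m.group(2)

for cf, files, size in cf_summary("journal.txt"):
    print(f"{cf:8} files={files:5} size={size}")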
Jan 21 13:45:32 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 21 13:45:32 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 21 13:45:32 compute-0 ceph-osd[86795]: _get_class not permitted to load lua
Jan 21 13:45:32 compute-0 ceph-osd[86795]: _get_class not permitted to load sdk
Jan 21 13:45:32 compute-0 ceph-osd[86795]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 21 13:45:32 compute-0 ceph-osd[86795]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 21 13:45:32 compute-0 ceph-osd[86795]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 21 13:45:32 compute-0 ceph-osd[86795]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 21 13:45:32 compute-0 ceph-osd[86795]: osd.1 0 load_pgs
Jan 21 13:45:32 compute-0 ceph-osd[86795]: osd.1 0 load_pgs opened 0 pgs
Jan 21 13:45:32 compute-0 ceph-osd[86795]: osd.1 0 log_to_monitors true
Jan 21 13:45:32 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1[86791]: 2026-01-21T13:45:32.561+0000 7f09ad8058c0 -1 osd.1 0 log_to_monitors true
Jan 21 13:45:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 21 13:45:32 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
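Here osd.1 registers its own device class with the mon at startup. The same assignment can be made by an operator with the stock CLI; the sketch below assumes the cluster accepts the bare numeric id shown in the audit JSON (the documented form also takes names like osd.1):

# Sketch of the operator-side equivalent; assumes admin access to the cluster.
import subprocess

def set_device_class(osd_id: str, dev_class: str = "hdd") -> None:
    # "ceph osd crush set-device-class <class> <id>..." is the stock command;
    # it refuses to overwrite an existing class unless that class is removed
    # first (ceph osd crush rm-device-class).
    subprocess.run(
        ["ceph", "osd", "crush", "set-device-class", dev_class, osd_id],
        check=True,
    )

set_device_class("1")  # mirrors cmd={"class": "hdd", "ids": ["1"]} above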
Jan 21 13:45:32 compute-0 podman[87334]: 2026-01-21 13:45:32.672129324 +0000 UTC m=+0.063696353 container create e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:32 compute-0 podman[87334]: 2026-01-21 13:45:32.62804331 +0000 UTC m=+0.019610359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:32 compute-0 systemd[1]: Started libpod-conmon-e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8.scope.
Jan 21 13:45:32 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:32 compute-0 podman[87334]: 2026-01-21 13:45:32.791973318 +0000 UTC m=+0.183540427 container init e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:32 compute-0 podman[87334]: 2026-01-21 13:45:32.805283336 +0000 UTC m=+0.196850365 container start e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 13:45:32 compute-0 heuristic_bhaskara[87350]: 167 167
Jan 21 13:45:32 compute-0 systemd[1]: libpod-e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8.scope: Deactivated successfully.
Jan 21 13:45:32 compute-0 conmon[87350]: conmon e1dde2ca6305377843c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8.scope/container/memory.events
Jan 21 13:45:32 compute-0 podman[87334]: 2026-01-21 13:45:32.814922996 +0000 UTC m=+0.206490025 container attach e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:32 compute-0 podman[87334]: 2026-01-21 13:45:32.815710176 +0000 UTC m=+0.207277235 container died e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 13:45:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d73f4af2fb1f88925249c286c1dbedb3feca8ced2e398f9d9f129ae30ab91b95-merged.mount: Deactivated successfully.
Jan 21 13:45:32 compute-0 podman[87334]: 2026-01-21 13:45:32.940100269 +0000 UTC m=+0.331667308 container remove e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:32 compute-0 systemd[1]: libpod-conmon-e1dde2ca6305377843c092d6fcc9ad49242697e94c64164906570de1b88c9bc8.scope: Deactivated successfully.
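The container e1dde2ca... above is a short-lived cephadm probe: create, init, start, attach, died, and remove all land within roughly 0.3 s. A sketch that reconstructs such lifecycles from podman journal lines, keeping only the UTC timestamp, the event verb, and the 64-hex container id (journal.txt is hypothetical):

# Reconstruct container lifetimes from podman's timestamped event lines.
import re
from datetime import datetime

EVT = re.compile(
    r"podman\[\d+\]: (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
    r"m=\+\S+ container (\w+) ([0-9a-f]{64})"
)

def lifetimes(path: str):
    spans = {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVT.search(line)
            if m:
                # journal timestamps carry 9 fractional digits; trim to the 6
                # that datetime supports
                ts = datetime.fromisoformat(m.group(1)[:26])
                spans.setdefault(m.group(3), {})[m.group(2)] = ts
    for cid, ev in spans.items():
        if "create" in ev and "remove" in ev:
            yield cid[:12], (ev["remove"] - ev["create"]).total_seconds()

for cid, secs in lifetimes("journal.txt"):
    print(f"{cid} lived {secs:.3f}s")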
Jan 21 13:45:32 compute-0 ceph-mgr[75322]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 21 13:45:33 compute-0 ceph-mon[75031]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 21 13:45:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:33 compute-0 ceph-mon[75031]: Deploying daemon osd.2 on compute-0
Jan 21 13:45:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:33 compute-0 ceph-mon[75031]: from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 21 13:45:33 compute-0 podman[87381]: 2026-01-21 13:45:33.204873747 +0000 UTC m=+0.059515634 container create a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:33 compute-0 systemd[1]: Started libpod-conmon-a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358.scope.
Jan 21 13:45:33 compute-0 podman[87381]: 2026-01-21 13:45:33.166684954 +0000 UTC m=+0.021326871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:33 compute-0 podman[87381]: 2026-01-21 13:45:33.319682401 +0000 UTC m=+0.174324308 container init a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:33 compute-0 podman[87381]: 2026-01-21 13:45:33.326610336 +0000 UTC m=+0.181252223 container start a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:33 compute-0 podman[87381]: 2026-01-21 13:45:33.336063222 +0000 UTC m=+0.190705109 container attach a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:33 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3094763527; not ready for session (expect reconnect)
Jan 21 13:45:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 13:45:33 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:33 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 21 13:45:33 compute-0 ceph-osd[85740]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 26.383 iops: 6753.955 elapsed_sec: 0.444
Jan 21 13:45:33 compute-0 ceph-osd[85740]: log_channel(cluster) log [WRN] : OSD bench result of 6753.955447 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
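The warning above recommends measuring the device's real IOPS with an external benchmark and overriding the mclock option it names. A minimal sketch of that procedure, assuming fio is installed; the --filename target is an illustrative assumption, not taken from this log:

# Measure 4 KiB random-write IOPS with fio. WARNING: this overwrites the
# target device; point it only at a disposable LV/device.
fio --name=osd-bench --filename=/dev/ceph_vg0/ceph_lv0 --ioengine=libaio \
    --direct=1 --rw=randwrite --bs=4k --iodepth=16 --runtime=60 \
    --time_based --group_reporting

# Feed the measured IOPS back to the HDD-class OSD, as the warning suggests
# (osd_mclock_max_capacity_iops_hdd is the option named in the log line).
ceph config set osd.0 osd_mclock_max_capacity_iops_hdd <measured-iops>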
Jan 21 13:45:33 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0[85736]: 2026-01-21T13:45:33.399+0000 7f795064b640 -1 osd.0 0 waiting for initial osdmap
Jan 21 13:45:33 compute-0 ceph-osd[85740]: osd.0 0 waiting for initial osdmap
Jan 21 13:45:33 compute-0 ceph-osd[85740]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 21 13:45:33 compute-0 ceph-osd[85740]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 21 13:45:33 compute-0 ceph-osd[85740]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 21 13:45:33 compute-0 ceph-osd[85740]: osd.0 8 check_osdmap_features require_osd_release unknown -> tentacle
Jan 21 13:45:33 compute-0 ceph-osd[85740]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 13:45:33 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-0[85736]: 2026-01-21T13:45:33.421+0000 7f794ac3e640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 13:45:33 compute-0 ceph-osd[85740]: osd.0 8 set_numa_affinity not setting numa affinity
Jan 21 13:45:33 compute-0 ceph-osd[85740]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 21 13:45:33 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test[87396]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 21 13:45:33 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test[87396]:                             [--no-systemd] [--no-tmpfs]
Jan 21 13:45:33 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test[87396]: ceph-volume activate: error: unrecognized arguments: --bad-option
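The activate-test container exits here because `--bad-option` is not a flag the `ceph-volume activate` parser accepts; the usage string it prints lists the valid ones. A hedged example of a well-formed invocation built only from that usage string (the OSD fsid argument is omitted because this log does not show it):

# Activate osd.2, skipping systemd integration inside the container;
# both flags appear in the usage message printed above.
ceph-volume activate --osd-id 2 --no-systemd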
Jan 21 13:45:33 compute-0 systemd[1]: libpod-a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358.scope: Deactivated successfully.
Jan 21 13:45:33 compute-0 podman[87381]: 2026-01-21 13:45:33.513078603 +0000 UTC m=+0.367720490 container died a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 21 13:45:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 21 13:45:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:45:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 21 13:45:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Jan 21 13:45:33 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527] boot
Jan 21 13:45:33 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Jan 21 13:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e85ab24cde2f97cdba69e044b30363682d6d780ced24c8b1117291df18d77b1-merged.mount: Deactivated successfully.
Jan 21 13:45:33 compute-0 ceph-osd[85740]: osd.0 9 state: booting -> active
Jan 21 13:45:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 21 13:45:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 13:45:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
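At boot each OSD registers its own CRUSH location, which is what the dispatched create-or-move above records. A rough hand-run equivalent, reconstructed from that JSON payload:

# Place osd.1 under host=compute-0 / root=default with its ~0.02 weight.
ceph osd crush create-or-move osd.1 0.0195 host=compute-0 root=default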
Jan 21 13:45:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 21 13:45:33 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:33 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:33 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:33 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:33 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:33 compute-0 podman[87381]: 2026-01-21 13:45:33.550416825 +0000 UTC m=+0.405058712 container remove a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 13:45:33 compute-0 systemd[1]: libpod-conmon-a5ff70f4e3e88ccf21eba8e93353cc0ed9478af581b41a7960ecb4de5fab4358.scope: Deactivated successfully.
Jan 21 13:45:33 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 21 13:45:33 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 21 13:45:33 compute-0 systemd[1]: Reloading.
Jan 21 13:45:33 compute-0 systemd-sysv-generator[87462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:45:33 compute-0 systemd-rc-local-generator[87458]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:45:34 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:34 compute-0 ceph-mon[75031]: OSD bench result of 6753.955447 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 13:45:34 compute-0 ceph-mon[75031]: from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 21 13:45:34 compute-0 ceph-mon[75031]: osd.0 [v2:192.168.122.100:6802/3094763527,v1:192.168.122.100:6803/3094763527] boot
Jan 21 13:45:34 compute-0 ceph-mon[75031]: osdmap e9: 3 total, 1 up, 3 in
Jan 21 13:45:34 compute-0 ceph-mon[75031]: from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 13:45:34 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 21 13:45:34 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:34 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:34 compute-0 systemd[1]: Reloading.
Jan 21 13:45:34 compute-0 systemd-rc-local-generator[87502]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:45:34 compute-0 systemd-sysv-generator[87505]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:45:34 compute-0 systemd[1]: Starting Ceph osd.2 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:45:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 21 13:45:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:45:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 13:45:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Jan 21 13:45:34 compute-0 ceph-osd[86795]: osd.1 0 done with init, starting boot process
Jan 21 13:45:34 compute-0 ceph-osd[86795]: osd.1 0 start_boot
Jan 21 13:45:34 compute-0 ceph-osd[86795]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 21 13:45:34 compute-0 ceph-osd[86795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 21 13:45:34 compute-0 ceph-osd[86795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 21 13:45:34 compute-0 ceph-osd[86795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 21 13:45:34 compute-0 ceph-osd[86795]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 21 13:45:34 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Jan 21 13:45:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:34 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:34 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:34 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:34 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:34 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2636246499; not ready for session (expect reconnect)
Jan 21 13:45:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:34 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:34 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:34 compute-0 podman[87558]: 2026-01-21 13:45:34.629341192 +0000 UTC m=+0.053564201 container create b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 21 13:45:34 compute-0 podman[87558]: 2026-01-21 13:45:34.599294545 +0000 UTC m=+0.023517584 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:34 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:34 compute-0 podman[87558]: 2026-01-21 13:45:34.76523141 +0000 UTC m=+0.189454449 container init b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:34 compute-0 podman[87558]: 2026-01-21 13:45:34.77568242 +0000 UTC m=+0.199905439 container start b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:34 compute-0 podman[87558]: 2026-01-21 13:45:34.793861204 +0000 UTC m=+0.218084323 container attach b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 13:45:34 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:34 compute-0 bash[87558]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:34 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:34 compute-0 bash[87558]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:34 compute-0 ceph-mgr[75322]: [devicehealth INFO root] creating mgr pool
Jan 21 13:45:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 21 13:45:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 21 13:45:35 compute-0 ceph-mon[75031]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 21 13:45:35 compute-0 ceph-mon[75031]: from='osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 13:45:35 compute-0 ceph-mon[75031]: osdmap e10: 3 total, 1 up, 3 in
Jan 21 13:45:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 21 13:45:35 compute-0 lvm[87662]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:45:35 compute-0 lvm[87662]: VG ceph_vg1 finished
Jan 21 13:45:35 compute-0 lvm[87661]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:45:35 compute-0 lvm[87661]: VG ceph_vg0 finished
Jan 21 13:45:35 compute-0 lvm[87664]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:45:35 compute-0 lvm[87664]: VG ceph_vg2 finished
Jan 21 13:45:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 21 13:45:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 21 13:45:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 21 13:45:35 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2636246499; not ready for session (expect reconnect)
Jan 21 13:45:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:35 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:35 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:35 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 21 13:45:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 21 13:45:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 21 13:45:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 13:45:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 13:45:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 21 13:45:35 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 21 13:45:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:35 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:35 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:35 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:35 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 21 13:45:35 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
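The two mon_commands dispatched here are the mgr devicehealth module creating the `.mgr` pool and tagging its application. A sketch of the CLI equivalent of that JSON (pg_num_min/pg_num_max are left out, since their command-line spelling is not shown in this log; the override flag matches the "yes_i_really_mean_it" field, needed because built-in pool names start with a dot):

ceph osd pool create .mgr 1 --yes-i-really-mean-it
ceph osd pool application enable .mgr mgr --yes-i-really-mean-it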
Jan 21 13:45:35 compute-0 ceph-osd[85740]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 21 13:45:35 compute-0 ceph-osd[85740]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 21 13:45:35 compute-0 ceph-osd[85740]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 21 13:45:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 13:45:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:35 compute-0 bash[87558]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 21 13:45:35 compute-0 bash[87558]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:35 compute-0 bash[87558]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 21 13:45:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 13:45:35 compute-0 bash[87558]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 13:45:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 21 13:45:35 compute-0 bash[87558]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 21 13:45:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:35 compute-0 bash[87558]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:35 compute-0 bash[87558]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 21 13:45:35 compute-0 bash[87558]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 21 13:45:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 13:45:35 compute-0 bash[87558]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 21 13:45:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate[87574]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 21 13:45:35 compute-0 bash[87558]: --> ceph-volume lvm activate successful for osd ID: 2
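After a successful `ceph-volume lvm activate`, the primed data directory can be checked directly; a small verification sketch using only the paths this log shows:

# List LVM-backed OSDs known to ceph-volume (should include osd.2).
ceph-volume lvm list
# The symlink created by "ln -snf" above should point at ceph_vg2/ceph_lv2.
ls -l /var/lib/ceph/osd/ceph-2/block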
Jan 21 13:45:35 compute-0 systemd[1]: libpod-b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04.scope: Deactivated successfully.
Jan 21 13:45:35 compute-0 systemd[1]: libpod-b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04.scope: Consumed 1.481s CPU time.
Jan 21 13:45:35 compute-0 podman[87558]: 2026-01-21 13:45:35.879177185 +0000 UTC m=+1.303400194 container died b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 13:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-68ee551e8d3814b53f3d5700e9a037ab27cff0e9080b78f401f2535506b0d649-merged.mount: Deactivated successfully.
Jan 21 13:45:36 compute-0 podman[87558]: 2026-01-21 13:45:36.003617099 +0000 UTC m=+1.427840128 container remove b8990fd92ca3be68560339836ee6e19a602be5abfa83ea2766c959e2f467cb04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:36 compute-0 ceph-mon[75031]: purged_snaps scrub starts
Jan 21 13:45:36 compute-0 ceph-mon[75031]: purged_snaps scrub ok
Jan 21 13:45:36 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:36 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 21 13:45:36 compute-0 ceph-mon[75031]: osdmap e11: 3 total, 1 up, 3 in
Jan 21 13:45:36 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:36 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:36 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 21 13:45:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:36 compute-0 podman[87823]: 2026-01-21 13:45:36.245404597 +0000 UTC m=+0.070887044 container create 391c65d49d06996033f966187742c0fd8d42ad35a268091a77911a32009e3e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 13:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e72ca374161b68879359b3ee3ec8d2418551af35c37f02b32bc6be3fa7b7fbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e72ca374161b68879359b3ee3ec8d2418551af35c37f02b32bc6be3fa7b7fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e72ca374161b68879359b3ee3ec8d2418551af35c37f02b32bc6be3fa7b7fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e72ca374161b68879359b3ee3ec8d2418551af35c37f02b32bc6be3fa7b7fbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e72ca374161b68879359b3ee3ec8d2418551af35c37f02b32bc6be3fa7b7fbb/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:36 compute-0 podman[87823]: 2026-01-21 13:45:36.202670456 +0000 UTC m=+0.028152903 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:36 compute-0 podman[87823]: 2026-01-21 13:45:36.329972639 +0000 UTC m=+0.155455126 container init 391c65d49d06996033f966187742c0fd8d42ad35a268091a77911a32009e3e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 13:45:36 compute-0 podman[87823]: 2026-01-21 13:45:36.335914901 +0000 UTC m=+0.161397338 container start 391c65d49d06996033f966187742c0fd8d42ad35a268091a77911a32009e3e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:36 compute-0 bash[87823]: 391c65d49d06996033f966187742c0fd8d42ad35a268091a77911a32009e3e7a
Jan 21 13:45:36 compute-0 systemd[1]: Started Ceph osd.2 for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
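cephadm runs each daemon under an fsid-templated systemd unit. Assuming the usual `ceph-<fsid>@<daemon>` naming (an assumption, since the unit name itself is not printed in this log), the newly started OSD can be inspected with:

# Query the unit systemd just reported as started, then tail its journal.
systemctl status 'ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@osd.2.service'
journalctl -u 'ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@osd.2.service' -n 50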
Jan 21 13:45:36 compute-0 ceph-osd[87843]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: pidfile_write: ignore empty --pid-file
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 sudo[86864]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12400 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2636246499; not ready for session (expect reconnect)
Jan 21 13:45:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:36 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:36 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe12000 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 21 13:45:36 compute-0 ceph-osd[87843]: load: jerasure load: lrc 
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 21 13:45:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 21 13:45:36 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 21 13:45:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:36 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:36 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:36 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:36 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:36 compute-0 sudo[87867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 sudo[87867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 sudo[87867]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:36 compute-0 ceph-osd[87843]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 21 13:45:36 compute-0 ceph-osd[87843]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 sudo[87903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:45:36 compute-0 sudo[87903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x55794fe13c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount shared_bdev_used = 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: RocksDB version: 7.9.2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Git sha 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: DB SUMMARY
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: DB Session ID:  1BG515NGS1LBB8AUUWF8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: CURRENT file:  CURRENT
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                         Options.error_if_exists: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.create_if_missing: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                                     Options.env: 0x55794fca3ea0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                                Options.info_log: 0x557950cfe8a0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                              Options.statistics: (nil)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.use_fsync: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                              Options.db_log_dir: 
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.write_buffer_manager: 0x557950ba4b40
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.unordered_write: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.row_cache: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                              Options.wal_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.two_write_queues: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.wal_compression: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.atomic_flush: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.max_background_jobs: 4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.max_background_compactions: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.max_subcompactions: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.max_open_files: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Compression algorithms supported:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kZSTD supported: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kXpressCompression supported: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kBZip2Compression supported: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kLZ4Compression supported: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kZlibCompression supported: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kSnappyCompression supported: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
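
Note: with level_compaction_dynamic_level_bytes at 0 and every max_bytes_for_level_multiplier_addtl entry at 1, the level targets in the dump above follow plain geometric growth from the base. A minimal sketch of what the logged sizing options imply, derived only from the values printed here:

    # Sketch: level capacities implied by the logged options, assuming
    # RocksDB's non-dynamic sizing rule L(n) = base * multiplier**(n-1).
    base = 1073741824        # Options.max_bytes_for_level_base (1 GiB)
    multiplier = 8           # Options.max_bytes_for_level_multiplier
    num_levels = 7           # Options.num_levels (L0 is flush-driven)
    for n in range(1, num_levels):
        print(f"L{n} target: {base * multiplier ** (n - 1) / 2**30:g} GiB")
    # -> L1 1, L2 8, L3 64, L4 512, L5 4096, L6 32768 (GiB)

    # Memtable budget: 16 MiB buffers, up to 64 resident, merged in
    # groups of 6 before flush (min_write_buffer_number_to_merge).
    print(f"max memtable RAM: {16777216 * 64 / 2**30:g} GiB")  # 1 GiB

The L0 file-count triggers (8 to start compaction, 20 to slow writes, 36 to stop them) gate write throttling independently of these byte targets.
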
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950cfec80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d483eaa3-2246-48d8-b690-0a189d5aa6bb
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136777611, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136779831, "job": 1, "event": "recovery_finished"}
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
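
Note: the option string in the _open_db line is the comma-separated key=value form BlueStore hands to RocksDB (the same format Ceph's bluestore_rocksdb_options setting takes). A small sketch that splits it back into pairs, with the string copied verbatim from the line above:

    # Sketch: parse the logged bluestore rocksdb option string.
    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,"
                "compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,"
                "max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,"
                "compaction_readahead_size=2MB,max_total_wal_size=1073741824,"
                "writable_file_max_buffer_size=0")
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    print(opts["compression"], opts["write_buffer_size"])
    # kLZ4Compression 16777216

Most values echo the per-column-family dumps above: LZ4 compression, 16 MiB write buffers, 1 GiB level base with multiplier 8.
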
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: freelist init
Jan 21 13:45:36 compute-0 ceph-osd[87843]: freelist _read_cfg
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
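
Note: the allocator line reports sizes as hex byte counts, and the numbers are self-consistent: capacity 0x4ffc00000 is the 21470642176-byte device reported by the bdev open line below, and the gap between capacity and free space is three min_alloc_size units. A quick arithmetic check:

    # Sanity check of the _init_alloc figures (hex byte counts).
    capacity = 0x4ffc00000
    free = 0x4ffbfd000
    min_alloc = 0x1000                           # from _open_super_meta above
    print(capacity, round(capacity / 2**30, 3))  # 21470642176 ~ 19.996 GiB
    print((capacity - free) // min_alloc)        # 3 units (12 KiB) in use
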
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs umount
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) close
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bdev(0x557950ab3800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluefs mount shared_bdev_used = 27262976
Jan 21 13:45:36 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
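
Note: the db_paths size looks like a fixed fraction of the raw device: 20397110067 is exactly 95% of the 21470642176-byte block device. Whether BlueStore always sizes db_paths at 95% is an assumption here; only the arithmetic below is taken from the log. The shared_bdev_used figure works out to 26 MiB:

    # Checking the _prepare_db_environment / bluefs figures above.
    device = 21470642176                 # bdev open size, in bytes
    print(device * 95 // 100)            # 20397110067, the db_paths size
    print(27262976 / 2**20)              # 26.0 (MiB), shared_bdev_used
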
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: RocksDB version: 7.9.2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Git sha 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: DB SUMMARY
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: DB Session ID:  1BG515NGS1LBB8AUUWF9
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: CURRENT file:  CURRENT
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: IDENTITY file:  IDENTITY
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                         Options.error_if_exists: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.create_if_missing: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                         Options.paranoid_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                                     Options.env: 0x557950af9f80
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                                Options.info_log: 0x557950d0b2a0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_file_opening_threads: 16
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                              Options.statistics: (nil)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.use_fsync: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.max_log_file_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                         Options.allow_fallocate: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.use_direct_reads: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.create_missing_column_families: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                              Options.db_log_dir: 
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                                 Options.wal_dir: db.wal
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.advise_random_on_open: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.write_buffer_manager: 0x557950ba5900
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                            Options.rate_limiter: (nil)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.unordered_write: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.row_cache: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                              Options.wal_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.allow_ingest_behind: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.two_write_queues: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.manual_wal_flush: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.wal_compression: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.atomic_flush: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.log_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.allow_data_in_errors: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.db_host_id: __hostname__
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.max_background_jobs: 4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.max_background_compactions: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.max_subcompactions: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.max_open_files: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.bytes_per_sync: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.max_background_flushes: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Compression algorithms supported:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kZSTD supported: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kXpressCompression supported: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kBZip2Compression supported: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kLZ4Compression supported: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kZlibCompression supported: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kLZ4HCCompression supported: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         kSnappyCompression supported: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: DMutex implementation: pthread_mutex_t
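
Note: the capability list shows this build supports LZ4 (the configured algorithm), Zlib, LZ4HC and Snappy, but not ZSTD, so the zstd-specific knobs in the option dumps (zstd_max_train_bytes, use_zstd_dict_trainer) have no effect here. The same list as data, for scripting against such logs; values are copied verbatim from the lines above:

    # The compression support list, keyed by RocksDB's enum names.
    supported = {
        "kZSTD": 0, "kXpressCompression": 0, "kBZip2Compression": 0,
        "kZSTDNotFinalCompression": 0, "kLZ4Compression": 1,
        "kZlibCompression": 1, "kLZ4HCCompression": 1,
        "kSnappyCompression": 1,
    }
    print(sorted(k for k, v in supported.items() if v))
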
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca7a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a020)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a020)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:           Options.merge_operator: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.compaction_filter_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.sst_partitioner_factory: None
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557950d0a020)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55794fca74b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.write_buffer_size: 16777216
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.max_write_buffer_number: 64
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.compression: LZ4
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.num_levels: 7
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.level: 32767
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.compression_opts.strategy: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                  Options.compression_opts.enabled: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.arena_block_size: 1048576
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.disable_auto_compactions: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.inplace_update_support: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.bloom_locality: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                    Options.max_successive_merges: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.paranoid_file_checks: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.force_consistency_checks: 1
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.report_bg_io_stats: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                               Options.ttl: 2592000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                       Options.enable_blob_files: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                           Options.min_blob_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                          Options.blob_file_size: 268435456
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb:                Options.blob_file_starting_level: 0
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d483eaa3-2246-48d8-b690-0a189d5aa6bb
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136874055, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136886035, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003136, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d483eaa3-2246-48d8-b690-0a189d5aa6bb", "db_session_id": "1BG515NGS1LBB8AUUWF9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136889244, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003136, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d483eaa3-2246-48d8-b690-0a189d5aa6bb", "db_session_id": "1BG515NGS1LBB8AUUWF9", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136919472, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003136, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d483eaa3-2246-48d8-b690-0a189d5aa6bb", "db_session_id": "1BG515NGS1LBB8AUUWF9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003136921854, "job": 1, "event": "recovery_finished"}
Jan 21 13:45:36 compute-0 ceph-osd[87843]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 21 13:45:36 compute-0 podman[88323]: 2026-01-21 13:45:36.995303421 +0000 UTC m=+0.057145197 container create 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 13:45:37 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557950ee3c00
Jan 21 13:45:37 compute-0 podman[88323]: 2026-01-21 13:45:36.966237846 +0000 UTC m=+0.028079622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:37 compute-0 ceph-osd[87843]: rocksdb: DB pointer 0x557950eb8000
Jan 21 13:45:37 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 21 13:45:37 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 21 13:45:37 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 21 13:45:37 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 13:45:37 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
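
[Annotation] The indented block above is the periodic RocksDB statistics dump that the OSD's BlueStore backend writes to its log, one set of tables per column family ([L], [P], ...). A minimal, hypothetical Python sketch for pulling the cumulative compaction figures out of such a dump; the regex and function name are illustrative and not part of any Ceph tooling:

    import re

    # Matches e.g. "Cumulative compaction: 0.00 GB write, 0.00 MB/s write,
    # 0.00 GB read, 0.00 MB/s read, 0.0 seconds" as seen in the dump above.
    CUMULATIVE = re.compile(
        r"Cumulative compaction: ([\d.]+) GB write, ([\d.]+) MB/s write, "
        r"([\d.]+) GB read, ([\d.]+) MB/s read, ([\d.]+) seconds"
    )

    def cumulative_compaction(dump: str):
        """Return (gb_written, wr_mbps, gb_read, rd_mbps, secs), or None."""
        m = CUMULATIVE.search(dump)
        return tuple(float(x) for x in m.groups()) if m else None
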
Jan 21 13:45:37 compute-0 systemd[1]: Started libpod-conmon-0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350.scope.
Jan 21 13:45:37 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 21 13:45:37 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 21 13:45:37 compute-0 ceph-osd[87843]: _get_class not permitted to load lua
Jan 21 13:45:37 compute-0 ceph-osd[87843]: _get_class not permitted to load sdk
Jan 21 13:45:37 compute-0 ceph-osd[87843]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 21 13:45:37 compute-0 ceph-osd[87843]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 21 13:45:37 compute-0 ceph-osd[87843]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 21 13:45:37 compute-0 ceph-osd[87843]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 21 13:45:37 compute-0 ceph-osd[87843]: osd.2 0 load_pgs
Jan 21 13:45:37 compute-0 ceph-osd[87843]: osd.2 0 load_pgs opened 0 pgs
Jan 21 13:45:37 compute-0 ceph-osd[87843]: osd.2 0 log_to_monitors true
Jan 21 13:45:37 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2[87839]: 2026-01-21T13:45:37.068+0000 7f6e1fa298c0 -1 osd.2 0 log_to_monitors true
Jan 21 13:45:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 21 13:45:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 21 13:45:37 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:37 compute-0 ceph-mon[75031]: pgmap v29: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 21 13:45:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 21 13:45:37 compute-0 ceph-mon[75031]: osdmap e12: 3 total, 1 up, 3 in
Jan 21 13:45:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:37 compute-0 ceph-mon[75031]: from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 21 13:45:37 compute-0 podman[88323]: 2026-01-21 13:45:37.125604105 +0000 UTC m=+0.187445891 container init 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 13:45:37 compute-0 podman[88323]: 2026-01-21 13:45:37.1337605 +0000 UTC m=+0.195602266 container start 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:37 compute-0 unruffled_villani[88344]: 167 167
Jan 21 13:45:37 compute-0 systemd[1]: libpod-0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350.scope: Deactivated successfully.
Jan 21 13:45:37 compute-0 podman[88323]: 2026-01-21 13:45:37.156075954 +0000 UTC m=+0.217917720 container attach 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:37 compute-0 podman[88323]: 2026-01-21 13:45:37.156962354 +0000 UTC m=+0.218804120 container died 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-172315e706a0db28320083bb67857eb543d27c1562bdb0a6037e88a20cbdda84-merged.mount: Deactivated successfully.
Jan 21 13:45:37 compute-0 podman[88323]: 2026-01-21 13:45:37.301695134 +0000 UTC m=+0.363536900 container remove 0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_villani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 13:45:37 compute-0 systemd[1]: libpod-conmon-0de84289301774906aed8c592c5aa27dc199d9a363fc7f763cc8dbe386288350.scope: Deactivated successfully.
Jan 21 13:45:37 compute-0 podman[88396]: 2026-01-21 13:45:37.466526913 +0000 UTC m=+0.053461149 container create c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 21 13:45:37 compute-0 podman[88396]: 2026-01-21 13:45:37.437096439 +0000 UTC m=+0.024030725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:37 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2636246499; not ready for session (expect reconnect)
Jan 21 13:45:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:37 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:37 compute-0 systemd[1]: Started libpod-conmon-c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f.scope.
Jan 21 13:45:37 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06f130edd009eb39ff49ac7de9bf15ce6327f74b180c2e6429580a50f6be2b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06f130edd009eb39ff49ac7de9bf15ce6327f74b180c2e6429580a50f6be2b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06f130edd009eb39ff49ac7de9bf15ce6327f74b180c2e6429580a50f6be2b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f06f130edd009eb39ff49ac7de9bf15ce6327f74b180c2e6429580a50f6be2b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 21 13:45:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 21 13:45:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 21 13:45:37 compute-0 podman[88396]: 2026-01-21 13:45:37.615947174 +0000 UTC m=+0.202881500 container init c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 13:45:37 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 21 13:45:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 21 13:45:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 13:45:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 21 13:45:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:37 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:37 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:37 compute-0 podman[88396]: 2026-01-21 13:45:37.628042224 +0000 UTC m=+0.214976500 container start c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 13:45:37 compute-0 podman[88396]: 2026-01-21 13:45:37.652444677 +0000 UTC m=+0.239378943 container attach c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 21 13:45:38 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 21 13:45:38 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 21 13:45:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:38 compute-0 ceph-mon[75031]: from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 21 13:45:38 compute-0 ceph-mon[75031]: osdmap e13: 3 total, 1 up, 3 in
Jan 21 13:45:38 compute-0 ceph-mon[75031]: from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 21 13:45:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:38 compute-0 lvm[88487]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:45:38 compute-0 lvm[88487]: VG ceph_vg0 finished
Jan 21 13:45:38 compute-0 lvm[88489]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:45:38 compute-0 lvm[88489]: VG ceph_vg1 finished
Jan 21 13:45:38 compute-0 lvm[88490]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:45:38 compute-0 lvm[88490]: VG ceph_vg2 finished
Jan 21 13:45:38 compute-0 lvm[88491]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:45:38 compute-0 lvm[88491]: VG ceph_vg0 finished
Jan 21 13:45:38 compute-0 pedantic_golick[88412]: {}
Jan 21 13:45:38 compute-0 ceph-osd[86795]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.576 iops: 6035.420 elapsed_sec: 0.497
Jan 21 13:45:38 compute-0 ceph-osd[86795]: log_channel(cluster) log [WRN] : OSD bench result of 6035.420070 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
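
[Annotation] The [WRN] above is the mclock scheduler discarding an implausible osd bench result (6035 IOPS against an hdd-class device backed by a loop file) and keeping the default capacity of 315 IOPS. Following the message's own recommendation, a value measured with an external tool such as fio can be pinned via ceph config set; a minimal sketch, assuming the ceph CLI is on PATH and the figure has already been measured:

    import subprocess

    def pin_osd_iops(osd_id: int, iops: float, device_class: str = "hdd") -> None:
        """Override mclock's assumed IOPS capacity for one OSD, as the
        warning suggests, using the osd_mclock_max_capacity_iops_[hdd|ssd]
        option named in the log message."""
        option = f"osd_mclock_max_capacity_iops_{device_class}"
        subprocess.run(
            ["ceph", "config", "set", f"osd.{osd_id}", option, str(iops)],
            check=True,
        )

    # e.g. after fio measures ~300 IOPS on the device backing osd.1:
    pin_osd_iops(1, 300.0)
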
Jan 21 13:45:38 compute-0 systemd[1]: libpod-c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f.scope: Deactivated successfully.
Jan 21 13:45:38 compute-0 systemd[1]: libpod-c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f.scope: Consumed 1.401s CPU time.
Jan 21 13:45:38 compute-0 ceph-osd[86795]: osd.1 0 waiting for initial osdmap
Jan 21 13:45:38 compute-0 podman[88396]: 2026-01-21 13:45:38.496609823 +0000 UTC m=+1.083544059 container died c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 13:45:38 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1[86791]: 2026-01-21T13:45:38.494+0000 7f09a9f99640 -1 osd.1 0 waiting for initial osdmap
Jan 21 13:45:38 compute-0 ceph-osd[86795]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 21 13:45:38 compute-0 ceph-osd[86795]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 21 13:45:38 compute-0 ceph-osd[86795]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 21 13:45:38 compute-0 ceph-osd[86795]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Jan 21 13:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f06f130edd009eb39ff49ac7de9bf15ce6327f74b180c2e6429580a50f6be2b0-merged.mount: Deactivated successfully.
Jan 21 13:45:38 compute-0 ceph-osd[86795]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 13:45:38 compute-0 ceph-osd[86795]: osd.1 13 set_numa_affinity not setting numa affinity
Jan 21 13:45:38 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-1[86791]: 2026-01-21T13:45:38.529+0000 7f09a458c640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 13:45:38 compute-0 ceph-osd[86795]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 21 13:45:38 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2636246499; not ready for session (expect reconnect)
Jan 21 13:45:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:38 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:38 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 21 13:45:38 compute-0 podman[88396]: 2026-01-21 13:45:38.547294464 +0000 UTC m=+1.134228700 container remove c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 13:45:38 compute-0 systemd[1]: libpod-conmon-c1ac4826a702962bc679e08e0714ad70ac80148456504cf816dcbe2f9b9e271f.scope: Deactivated successfully.
Jan 21 13:45:38 compute-0 sudo[87903]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:38 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:38 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 21 13:45:38 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 13:45:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 21 13:45:38 compute-0 ceph-osd[87843]: osd.2 0 done with init, starting boot process
Jan 21 13:45:38 compute-0 ceph-osd[87843]: osd.2 0 start_boot
Jan 21 13:45:38 compute-0 ceph-osd[87843]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 21 13:45:38 compute-0 ceph-osd[87843]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 21 13:45:38 compute-0 ceph-osd[87843]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 21 13:45:38 compute-0 ceph-osd[87843]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 21 13:45:38 compute-0 ceph-osd[87843]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 21 13:45:38 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499] boot
Jan 21 13:45:38 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 21 13:45:38 compute-0 ceph-osd[86795]: osd.1 14 state: booting -> active
Jan 21 13:45:38 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:45:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 21 13:45:38 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:38 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:38 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:38 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2442756555; not ready for session (expect reconnect)
Jan 21 13:45:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:38 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:38 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:38 compute-0 sudo[88509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:45:38 compute-0 sudo[88509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:38 compute-0 sudo[88509]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:38 compute-0 sudo[88534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:38 compute-0 sudo[88534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:38 compute-0 sudo[88534]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:38 compute-0 sudo[88559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 13:45:38 compute-0 sudo[88559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:39 compute-0 ceph-mon[75031]: pgmap v32: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 21 13:45:39 compute-0 ceph-mon[75031]: OSD bench result of 6035.420070 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 13:45:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:39 compute-0 ceph-mon[75031]: from='osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 21 13:45:39 compute-0 ceph-mon[75031]: osd.1 [v2:192.168.122.100:6806/2636246499,v1:192.168.122.100:6807/2636246499] boot
Jan 21 13:45:39 compute-0 ceph-mon[75031]: osdmap e14: 3 total, 2 up, 3 in
Jan 21 13:45:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 21 13:45:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:39 compute-0 podman[88629]: 2026-01-21 13:45:39.188598812 +0000 UTC m=+0.066532091 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:39 compute-0 podman[88629]: 2026-01-21 13:45:39.281191035 +0000 UTC m=+0.159124284 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 21 13:45:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:45:39
Jan 21 13:45:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:45:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 21 13:45:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 21 13:45:39 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2442756555; not ready for session (expect reconnect)
Jan 21 13:45:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:39 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 21 13:45:39 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 21 13:45:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:39 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:39 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:45:39 compute-0 ceph-mgr[75322]: [devicehealth INFO root] creating main.db for devicehealth
Jan 21 13:45:39 compute-0 sudo[88559]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:39 compute-0 ceph-mgr[75322]: [devicehealth INFO root] Check health
Jan 21 13:45:39 compute-0 ceph-mgr[75322]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 21 13:45:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 21 13:45:39 compute-0 sudo[88786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:39 compute-0 sudo[88786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:39 compute-0 sudo[88786]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:39 compute-0 sudo[88811]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 21 13:45:39 compute-0 sudo[88811]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 21 13:45:39 compute-0 sudo[88811]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 21 13:45:40 compute-0 sudo[88813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- inventory --format=json-pretty --filter-for-batch
Jan 21 13:45:40 compute-0 sudo[88813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:40 compute-0 sudo[88811]: pam_unix(sudo:session): session closed for user root
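
[Annotation] The devicehealth probe around the sudo lines above runs exactly the command logged: /usr/sbin/smartctl -x --json=o /dev/vda. A hedged sketch reproducing that probe by hand, assuming a smartctl build with JSON output support; note smartctl encodes status flags in an exit-code bitmask, so a nonzero return does not by itself mean the JSON is unusable (the mgr's "Fail to parse JSON result" error above is about an empty reply from the daemon, not smartctl itself):

    import json
    import subprocess

    def smart_probe(device: str) -> dict:
        """Run smartctl with JSON output, as logged above, and parse it."""
        proc = subprocess.run(
            ["smartctl", "-x", "--json=o", device],
            capture_output=True, text=True,
        )
        return json.loads(proc.stdout)

    info = smart_probe("/dev/vda")
    print(info.get("model_name"), info.get("smart_status"))
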
Jan 21 13:45:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 21 13:45:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 21 13:45:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 21 13:45:40 compute-0 ceph-mon[75031]: purged_snaps scrub starts
Jan 21 13:45:40 compute-0 ceph-mon[75031]: purged_snaps scrub ok
Jan 21 13:45:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:40 compute-0 ceph-mon[75031]: osdmap e15: 3 total, 2 up, 3 in
Jan 21 13:45:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:40 compute-0 ceph-mon[75031]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 21 13:45:40 compute-0 ceph-mon[75031]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 21 13:45:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 21 13:45:40 compute-0 podman[88852]: 2026-01-21 13:45:40.324290336 +0000 UTC m=+0.057508886 container create e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 13:45:40 compute-0 podman[88852]: 2026-01-21 13:45:40.290410996 +0000 UTC m=+0.023629566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:40 compute-0 systemd[1]: Started libpod-conmon-e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c.scope.
Jan 21 13:45:40 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:40 compute-0 podman[88852]: 2026-01-21 13:45:40.450440691 +0000 UTC m=+0.183659261 container init e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:40 compute-0 podman[88852]: 2026-01-21 13:45:40.457642714 +0000 UTC m=+0.190861264 container start e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:40 compute-0 interesting_dhawan[88868]: 167 167
Jan 21 13:45:40 compute-0 systemd[1]: libpod-e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c.scope: Deactivated successfully.
Jan 21 13:45:40 compute-0 conmon[88868]: conmon e9e422b5a42a144fdf4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c.scope/container/memory.events
Jan 21 13:45:40 compute-0 podman[88852]: 2026-01-21 13:45:40.488174193 +0000 UTC m=+0.221392773 container attach e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 13:45:40 compute-0 podman[88852]: 2026-01-21 13:45:40.488497721 +0000 UTC m=+0.221716281 container died e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-22b4d159e22d7e8af2715224b3ec6b6360c8fe84f526ee77f816f4d33c405c4d-merged.mount: Deactivated successfully.
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2442756555; not ready for session (expect reconnect)
Jan 21 13:45:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:40 compute-0 podman[88852]: 2026-01-21 13:45:40.64201175 +0000 UTC m=+0.375230310 container remove e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 13:45:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 21 13:45:40 compute-0 systemd[1]: libpod-conmon-e9e422b5a42a144fdf4f9b85d03d85733ef0373cda9446213cd6f6943f39976c.scope: Deactivated successfully.
Jan 21 13:45:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 21 13:45:40 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 21 13:45:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:40 compute-0 podman[88894]: 2026-01-21 13:45:40.832596215 +0000 UTC m=+0.047266430 container create ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 13:45:40 compute-0 systemd[1]: Started libpod-conmon-ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb.scope.
Jan 21 13:45:40 compute-0 podman[88894]: 2026-01-21 13:45:40.81062809 +0000 UTC m=+0.025298315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:40 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334d24a41a1a5d51b86470f742b4508cecb59767ce7f671d025527e16f8b035a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334d24a41a1a5d51b86470f742b4508cecb59767ce7f671d025527e16f8b035a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334d24a41a1a5d51b86470f742b4508cecb59767ce7f671d025527e16f8b035a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334d24a41a1a5d51b86470f742b4508cecb59767ce7f671d025527e16f8b035a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 21 13:45:40 compute-0 podman[88894]: 2026-01-21 13:45:40.960874553 +0000 UTC m=+0.175544768 container init ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 13:45:40 compute-0 podman[88894]: 2026-01-21 13:45:40.967841971 +0000 UTC m=+0.182512166 container start ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:45:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:45:40 compute-0 podman[88894]: 2026-01-21 13:45:40.998357998 +0000 UTC m=+0.213028193 container attach ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.tnwklj(active, since 61s)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 21 13:45:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:41 compute-0 ceph-mon[75031]: osdmap e16: 3 total, 2 up, 3 in
Jan 21 13:45:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mgrmap e10: compute-0.tnwklj(active, since 61s)
Jan 21 13:45:41 compute-0 charming_kirch[88910]: [
Jan 21 13:45:41 compute-0 charming_kirch[88910]:     {
Jan 21 13:45:41 compute-0 charming_kirch[88910]:         "available": false,
Jan 21 13:45:41 compute-0 charming_kirch[88910]:         "being_replaced": false,
Jan 21 13:45:41 compute-0 charming_kirch[88910]:         "ceph_device_lvm": false,
Jan 21 13:45:41 compute-0 charming_kirch[88910]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:         "lsm_data": {},
Jan 21 13:45:41 compute-0 charming_kirch[88910]:         "lvs": [],
Jan 21 13:45:41 compute-0 charming_kirch[88910]:         "path": "/dev/sr0",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:         "rejected_reasons": [
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "Has a FileSystem",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "Insufficient space (<5GB)"
Jan 21 13:45:41 compute-0 charming_kirch[88910]:         ],
Jan 21 13:45:41 compute-0 charming_kirch[88910]:         "sys_api": {
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "actuators": null,
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "device_nodes": [
Jan 21 13:45:41 compute-0 charming_kirch[88910]:                 "sr0"
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             ],
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "devname": "sr0",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "human_readable_size": "482.00 KB",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "id_bus": "ata",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "model": "QEMU DVD-ROM",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "nr_requests": "2",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "parent": "/dev/sr0",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "partitions": {},
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "path": "/dev/sr0",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "removable": "1",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "rev": "2.5+",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "ro": "0",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "rotational": "1",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "sas_address": "",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "sas_device_handle": "",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "scheduler_mode": "mq-deadline",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "sectors": 0,
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "sectorsize": "2048",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "size": 493568.0,
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "support_discard": "2048",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "type": "disk",
Jan 21 13:45:41 compute-0 charming_kirch[88910]:             "vendor": "QEMU"
Jan 21 13:45:41 compute-0 charming_kirch[88910]:         }
Jan 21 13:45:41 compute-0 charming_kirch[88910]:     }
Jan 21 13:45:41 compute-0 charming_kirch[88910]: ]
Jan 21 13:45:41 compute-0 systemd[1]: libpod-ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb.scope: Deactivated successfully.
Jan 21 13:45:41 compute-0 podman[88894]: 2026-01-21 13:45:41.49598854 +0000 UTC m=+0.710658735 container died ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 13:45:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 21 13:45:41 compute-0 sudo[89728]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pufiptmqkjamsegglvmaenyzaevhzcqm ; /usr/bin/python3'
Jan 21 13:45:41 compute-0 sudo[89728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-334d24a41a1a5d51b86470f742b4508cecb59767ce7f671d025527e16f8b035a-merged.mount: Deactivated successfully.
Jan 21 13:45:41 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2442756555; not ready for session (expect reconnect)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:41 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:41 compute-0 python3[89735]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:41 compute-0 podman[88894]: 2026-01-21 13:45:41.685912665 +0000 UTC m=+0.900582860 container remove ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:41 compute-0 systemd[1]: libpod-conmon-ecd91010a4de43b963fd856b9a93b306f4ff32da1523ee7c9331634fc4f6deeb.scope: Deactivated successfully.
Jan 21 13:45:41 compute-0 sudo[88813]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:41 compute-0 podman[89738]: 2026-01-21 13:45:41.753582508 +0000 UTC m=+0.048774169 container create 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 21 13:45:41 compute-0 ceph-mgr[75322]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43689k
Jan 21 13:45:41 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43689k
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 21 13:45:41 compute-0 ceph-mgr[75322]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Jan 21 13:45:41 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:45:41 compute-0 systemd[1]: Started libpod-conmon-669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17.scope.
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:45:41 compute-0 podman[89738]: 2026-01-21 13:45:41.730575263 +0000 UTC m=+0.025766964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:45:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5597912661e6268110fc44df36d3bf4f683642e96a81b724a0082a0d5632c1aa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5597912661e6268110fc44df36d3bf4f683642e96a81b724a0082a0d5632c1aa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5597912661e6268110fc44df36d3bf4f683642e96a81b724a0082a0d5632c1aa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:41 compute-0 podman[89738]: 2026-01-21 13:45:41.878828342 +0000 UTC m=+0.174020033 container init 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 21 13:45:41 compute-0 sudo[89760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:41 compute-0 sudo[89760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:41 compute-0 podman[89738]: 2026-01-21 13:45:41.884929109 +0000 UTC m=+0.180120780 container start 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:41 compute-0 sudo[89760]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:41 compute-0 podman[89738]: 2026-01-21 13:45:41.90857471 +0000 UTC m=+0.203766381 container attach 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:41 compute-0 sudo[89786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:45:41 compute-0 sudo[89786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:42 compute-0 podman[89843]: 2026-01-21 13:45:42.330461363 +0000 UTC m=+0.117591440 container create 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 13:45:42 compute-0 podman[89843]: 2026-01-21 13:45:42.246002485 +0000 UTC m=+0.033132592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:45:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:45:42 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.tnwklj(active, since 62s)
Jan 21 13:45:42 compute-0 systemd[1]: Started libpod-conmon-8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9.scope.
Jan 21 13:45:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 13:45:42 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8715348' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 13:45:42 compute-0 thirsty_elion[89757]: 
Jan 21 13:45:42 compute-0 thirsty_elion[89757]: {"fsid":"2f0e9cad-f0a3-5869-9cc3-8d84d071866a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":81,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":2,"osd_up_since":1769003138,"num_in_osds":3,"osd_in_since":1769003119,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":894091264,"bytes_avail":42047193088,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-21T13:44:18:859596+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-21T13:45:41.522372+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 21 13:45:42 compute-0 podman[89738]: 2026-01-21 13:45:42.4247519 +0000 UTC m=+0.719943571 container died 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 13:45:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:42 compute-0 systemd[1]: libpod-669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17.scope: Deactivated successfully.
Jan 21 13:45:42 compute-0 podman[89843]: 2026-01-21 13:45:42.480205958 +0000 UTC m=+0.267336055 container init 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:42 compute-0 podman[89843]: 2026-01-21 13:45:42.492658118 +0000 UTC m=+0.279788195 container start 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:42 compute-0 hardcore_hypatia[89860]: 167 167
Jan 21 13:45:42 compute-0 systemd[1]: libpod-8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9.scope: Deactivated successfully.
Jan 21 13:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5597912661e6268110fc44df36d3bf4f683642e96a81b724a0082a0d5632c1aa-merged.mount: Deactivated successfully.
Jan 21 13:45:42 compute-0 podman[89843]: 2026-01-21 13:45:42.520425659 +0000 UTC m=+0.307555736 container attach 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:42 compute-0 podman[89738]: 2026-01-21 13:45:42.553060566 +0000 UTC m=+0.848252237 container remove 669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17 (image=quay.io/ceph/ceph:v20, name=thirsty_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:42 compute-0 podman[89843]: 2026-01-21 13:45:42.557590355 +0000 UTC m=+0.344720452 container died 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:42 compute-0 systemd[1]: libpod-conmon-669a91411156f96987d69d86868bcd50b706a5244bf03869a178db1ad0246a17.scope: Deactivated successfully.
Jan 21 13:45:42 compute-0 sudo[89728]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e65e958054cbb998c07528f0ff384e722fcecab73aec4ceb5ceef5fab0b756c5-merged.mount: Deactivated successfully.
Jan 21 13:45:42 compute-0 ceph-mgr[75322]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2442756555; not ready for session (expect reconnect)
Jan 21 13:45:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:42 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:42 compute-0 ceph-mgr[75322]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 21 13:45:42 compute-0 podman[89843]: 2026-01-21 13:45:42.631009688 +0000 UTC m=+0.418139765 container remove 8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hypatia, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:42 compute-0 systemd[1]: libpod-conmon-8313d999d2f703da6f27ab88d0226284aa01c5d0d73cee1004bf73abc3bc88b9.scope: Deactivated successfully.
Jan 21 13:45:42 compute-0 podman[89898]: 2026-01-21 13:45:42.797360354 +0000 UTC m=+0.052856717 container create e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:42 compute-0 systemd[1]: Started libpod-conmon-e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629.scope.
Jan 21 13:45:42 compute-0 podman[89898]: 2026-01-21 13:45:42.772309249 +0000 UTC m=+0.027805602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:42 compute-0 sudo[89941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkkwexirelbixtqjpezbfpfihekjibmn ; /usr/bin/python3'
Jan 21 13:45:42 compute-0 sudo[89941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:42 compute-0 podman[89898]: 2026-01-21 13:45:42.912725578 +0000 UTC m=+0.168221941 container init e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:42 compute-0 podman[89898]: 2026-01-21 13:45:42.920174648 +0000 UTC m=+0.175670981 container start e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:42 compute-0 podman[89898]: 2026-01-21 13:45:42.93017202 +0000 UTC m=+0.185668353 container attach e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:42 compute-0 ceph-osd[87843]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.973 iops: 7416.996 elapsed_sec: 0.404
Jan 21 13:45:42 compute-0 ceph-osd[87843]: log_channel(cluster) log [WRN] : OSD bench result of 7416.996137 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 13:45:42 compute-0 ceph-osd[87843]: osd.2 0 waiting for initial osdmap
Jan 21 13:45:42 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2[87839]: 2026-01-21T13:45:42.930+0000 7f6e1c1bd640 -1 osd.2 0 waiting for initial osdmap
Jan 21 13:45:42 compute-0 ceph-osd[87843]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 21 13:45:42 compute-0 ceph-osd[87843]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 21 13:45:42 compute-0 ceph-osd[87843]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 21 13:45:42 compute-0 ceph-osd[87843]: osd.2 16 check_osdmap_features require_osd_release unknown -> tentacle
Jan 21 13:45:42 compute-0 ceph-osd[87843]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 13:45:42 compute-0 ceph-osd[87843]: osd.2 16 set_numa_affinity not setting numa affinity
Jan 21 13:45:42 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-osd-2[87839]: 2026-01-21T13:45:42.960+0000 7f6e167b0640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 21 13:45:42 compute-0 ceph-osd[87843]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 21 13:45:43 compute-0 python3[89943]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:43 compute-0 podman[89946]: 2026-01-21 13:45:43.065764172 +0000 UTC m=+0.021551701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 21 13:45:43 compute-0 podman[89946]: 2026-01-21 13:45:43.385617553 +0000 UTC m=+0.341405042 container create 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:43 compute-0 friendly_cartwright[89918]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:45:43 compute-0 friendly_cartwright[89918]: --> All data devices are unavailable
Jan 21 13:45:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Jan 21 13:45:43 compute-0 systemd[1]: libpod-e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629.scope: Deactivated successfully.
Jan 21 13:45:43 compute-0 podman[89898]: 2026-01-21 13:45:43.513232284 +0000 UTC m=+0.768728677 container died e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v39: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 21 13:45:43 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555] boot
Jan 21 13:45:43 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Jan 21 13:45:43 compute-0 systemd[1]: Started libpod-conmon-6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894.scope.
Jan 21 13:45:43 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da28567b720fe8043f2ae903a33e259a488b1875a2724821e75bbc6102c8aa98/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da28567b720fe8043f2ae903a33e259a488b1875a2724821e75bbc6102c8aa98/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 21 13:45:43 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:43 compute-0 ceph-osd[87843]: osd.2 17 state: booting -> active
Jan 21 13:45:43 compute-0 ceph-mon[75031]: pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 21 13:45:43 compute-0 ceph-mon[75031]: Adjusting osd_memory_target on compute-0 to 43689k
Jan 21 13:45:43 compute-0 ceph-mon[75031]: Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Jan 21 13:45:43 compute-0 ceph-mon[75031]: mgrmap e11: compute-0.tnwklj(active, since 62s)
Jan 21 13:45:43 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/8715348' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 13:45:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:43 compute-0 ceph-mon[75031]: OSD bench result of 7416.996137 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 21 13:45:43 compute-0 podman[89946]: 2026-01-21 13:45:43.62201472 +0000 UTC m=+0.577802289 container init 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 13:45:43 compute-0 podman[89946]: 2026-01-21 13:45:43.631034807 +0000 UTC m=+0.586822326 container start 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 13:45:43 compute-0 podman[89946]: 2026-01-21 13:45:43.640523586 +0000 UTC m=+0.596311175 container attach 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cfe29a3d90e73fbcffe0320d551c1990425f6d2465b7a15068e070f4f6a8946-merged.mount: Deactivated successfully.
Jan 21 13:45:43 compute-0 podman[89898]: 2026-01-21 13:45:43.768090386 +0000 UTC m=+1.023586759 container remove e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 13:45:43 compute-0 systemd[1]: libpod-conmon-e2afbedab00d75124e46eeeecc4aeacf9883eee665f79e9bd6706aefca9e7629.scope: Deactivated successfully.
Jan 21 13:45:43 compute-0 sudo[89786]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:43 compute-0 sudo[90013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:43 compute-0 sudo[90013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:43 compute-0 sudo[90013]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:43 compute-0 sudo[90038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:45:43 compute-0 sudo[90038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 13:45:44 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4103319343' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:44 compute-0 podman[90079]: 2026-01-21 13:45:44.233945101 +0000 UTC m=+0.054729062 container create 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:44 compute-0 systemd[1]: Started libpod-conmon-9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59.scope.
Jan 21 13:45:44 compute-0 podman[90079]: 2026-01-21 13:45:44.207066612 +0000 UTC m=+0.027850593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:44 compute-0 podman[90079]: 2026-01-21 13:45:44.315056069 +0000 UTC m=+0.135840050 container init 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 13:45:44 compute-0 podman[90079]: 2026-01-21 13:45:44.321293239 +0000 UTC m=+0.142077200 container start 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:44 compute-0 podman[90079]: 2026-01-21 13:45:44.325778287 +0000 UTC m=+0.146562238 container attach 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:44 compute-0 friendly_mendel[90096]: 167 167
Jan 21 13:45:44 compute-0 systemd[1]: libpod-9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59.scope: Deactivated successfully.
Jan 21 13:45:44 compute-0 podman[90079]: 2026-01-21 13:45:44.327309844 +0000 UTC m=+0.148093795 container died 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7469e86b0810ee158e233ae6977860b97257686e3a088aa6c44769a360e53181-merged.mount: Deactivated successfully.
Jan 21 13:45:44 compute-0 podman[90079]: 2026-01-21 13:45:44.373378306 +0000 UTC m=+0.194162257 container remove 9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:44 compute-0 systemd[1]: libpod-conmon-9d886ab6abea777ce3b143fd0f239a44f0947ef64ea379ffa29a06256067ea59.scope: Deactivated successfully.
Jan 21 13:45:44 compute-0 podman[90119]: 2026-01-21 13:45:44.54462628 +0000 UTC m=+0.053368269 container create 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 13:45:44 compute-0 systemd[1]: Started libpod-conmon-54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713.scope.
Jan 21 13:45:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 21 13:45:44 compute-0 ceph-mon[75031]: pgmap v39: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 21 13:45:44 compute-0 ceph-mon[75031]: osd.2 [v2:192.168.122.100:6810/2442756555,v1:192.168.122.100:6811/2442756555] boot
Jan 21 13:45:44 compute-0 ceph-mon[75031]: osdmap e17: 3 total, 3 up, 3 in
Jan 21 13:45:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 21 13:45:44 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/4103319343' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:44 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4103319343' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 21 13:45:44 compute-0 festive_zhukovsky[89989]: pool 'vms' created
Jan 21 13:45:44 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 21 13:45:44 compute-0 podman[90119]: 2026-01-21 13:45:44.525313264 +0000 UTC m=+0.034055283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c41ed8576e9bfab94bef5e918de7ed8d1e402da1d84d3c186bd166937709d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c41ed8576e9bfab94bef5e918de7ed8d1e402da1d84d3c186bd166937709d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c41ed8576e9bfab94bef5e918de7ed8d1e402da1d84d3c186bd166937709d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c41ed8576e9bfab94bef5e918de7ed8d1e402da1d84d3c186bd166937709d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:44 compute-0 systemd[1]: libpod-6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894.scope: Deactivated successfully.
Jan 21 13:45:44 compute-0 podman[89946]: 2026-01-21 13:45:44.636323903 +0000 UTC m=+1.592111392 container died 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 13:45:44 compute-0 podman[90119]: 2026-01-21 13:45:44.666093652 +0000 UTC m=+0.174835671 container init 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-da28567b720fe8043f2ae903a33e259a488b1875a2724821e75bbc6102c8aa98-merged.mount: Deactivated successfully.
Jan 21 13:45:44 compute-0 podman[90119]: 2026-01-21 13:45:44.676001981 +0000 UTC m=+0.184743980 container start 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:45:44 compute-0 podman[89946]: 2026-01-21 13:45:44.694349164 +0000 UTC m=+1.650136683 container remove 6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:44 compute-0 podman[90119]: 2026-01-21 13:45:44.699818866 +0000 UTC m=+0.208560885 container attach 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:44 compute-0 sudo[89941]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:44 compute-0 systemd[1]: libpod-conmon-6f73223198db335050d9d8a9c19ed2743bbf88d440272e16cd7da929a47f8894.scope: Deactivated successfully.
Jan 21 13:45:44 compute-0 sudo[90176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxvuursteyvmzdsblkrttzhybfvelesi ; /usr/bin/python3'
Jan 21 13:45:44 compute-0 sudo[90176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:44 compute-0 pensive_hellman[90135]: {
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:     "0": [
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:         {
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "devices": [
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "/dev/loop3"
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             ],
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_name": "ceph_lv0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_size": "21470642176",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "name": "ceph_lv0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "tags": {
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.cluster_name": "ceph",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.crush_device_class": "",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.encrypted": "0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.objectstore": "bluestore",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.osd_id": "0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.type": "block",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.vdo": "0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.with_tpm": "0"
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             },
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "type": "block",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "vg_name": "ceph_vg0"
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:         }
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:     ],
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:     "1": [
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:         {
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "devices": [
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "/dev/loop4"
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             ],
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_name": "ceph_lv1",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_size": "21470642176",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "name": "ceph_lv1",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "tags": {
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.cluster_name": "ceph",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.crush_device_class": "",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.encrypted": "0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.objectstore": "bluestore",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.osd_id": "1",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.type": "block",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.vdo": "0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.with_tpm": "0"
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             },
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "type": "block",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "vg_name": "ceph_vg1"
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:         }
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:     ],
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:     "2": [
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:         {
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "devices": [
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "/dev/loop5"
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             ],
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_name": "ceph_lv2",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_size": "21470642176",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "name": "ceph_lv2",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "tags": {
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.cluster_name": "ceph",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.crush_device_class": "",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.encrypted": "0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.objectstore": "bluestore",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.osd_id": "2",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.type": "block",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.vdo": "0",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:                 "ceph.with_tpm": "0"
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             },
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "type": "block",
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:             "vg_name": "ceph_vg2"
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:         }
Jan 21 13:45:44 compute-0 pensive_hellman[90135]:     ]
Jan 21 13:45:44 compute-0 pensive_hellman[90135]: }
Jan 21 13:45:44 compute-0 systemd[1]: libpod-54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713.scope: Deactivated successfully.
Jan 21 13:45:44 compute-0 conmon[90135]: conmon 54fd8e316c134d32daf1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713.scope/container/memory.events
Jan 21 13:45:45 compute-0 podman[90181]: 2026-01-21 13:45:45.042442066 +0000 UTC m=+0.028645722 container died 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 13:45:45 compute-0 python3[90178]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v42: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 21 13:45:45 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:45:45 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/4103319343' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:45 compute-0 ceph-mon[75031]: osdmap e18: 3 total, 3 up, 3 in
Jan 21 13:45:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6c41ed8576e9bfab94bef5e918de7ed8d1e402da1d84d3c186bd166937709d1-merged.mount: Deactivated successfully.
Jan 21 13:45:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 21 13:45:45 compute-0 podman[90181]: 2026-01-21 13:45:45.866091709 +0000 UTC m=+0.852295395 container remove 54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hellman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 21 13:45:45 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 21 13:45:45 compute-0 systemd[1]: libpod-conmon-54fd8e316c134d32daf1d290a8d1e88c964ede36c2467c326a63ab41d3093713.scope: Deactivated successfully.
Jan 21 13:45:45 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:45:45 compute-0 sudo[90038]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:45 compute-0 podman[90194]: 2026-01-21 13:45:45.965290423 +0000 UTC m=+0.887238298 container create d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 13:45:45 compute-0 sudo[90205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:45:45 compute-0 sudo[90205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:45 compute-0 sudo[90205]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:45 compute-0 systemd[1]: Started libpod-conmon-d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6.scope.
Jan 21 13:45:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6cb686b2c137970617a1bcacf25ec6fe5b4d14256e24f83102569416b75672/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6cb686b2c137970617a1bcacf25ec6fe5b4d14256e24f83102569416b75672/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:46 compute-0 podman[90194]: 2026-01-21 13:45:45.938750033 +0000 UTC m=+0.860697938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:46 compute-0 podman[90194]: 2026-01-21 13:45:46.038538472 +0000 UTC m=+0.960486377 container init d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:46 compute-0 podman[90194]: 2026-01-21 13:45:46.046128245 +0000 UTC m=+0.968076130 container start d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:46 compute-0 sudo[90235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:45:46 compute-0 podman[90194]: 2026-01-21 13:45:46.051593857 +0000 UTC m=+0.973541732 container attach d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:46 compute-0 sudo[90235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:46 compute-0 podman[90295]: 2026-01-21 13:45:46.316290446 +0000 UTC m=+0.036320268 container create 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 13:45:46 compute-0 systemd[1]: Started libpod-conmon-74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03.scope.
Jan 21 13:45:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:46 compute-0 podman[90295]: 2026-01-21 13:45:46.385754423 +0000 UTC m=+0.105784265 container init 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 13:45:46 compute-0 podman[90295]: 2026-01-21 13:45:46.392849495 +0000 UTC m=+0.112879317 container start 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:46 compute-0 podman[90295]: 2026-01-21 13:45:46.300873904 +0000 UTC m=+0.020903756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:46 compute-0 reverent_noether[90312]: 167 167
Jan 21 13:45:46 compute-0 systemd[1]: libpod-74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03.scope: Deactivated successfully.
Jan 21 13:45:46 compute-0 podman[90295]: 2026-01-21 13:45:46.405629073 +0000 UTC m=+0.125658975 container attach 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 13:45:46 compute-0 podman[90295]: 2026-01-21 13:45:46.406160895 +0000 UTC m=+0.126190737 container died 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 13:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c01d9faf4e6817f7dfc0573dd3a5b294bf043d9421afc5480a815cc97d121db4-merged.mount: Deactivated successfully.
Jan 21 13:45:46 compute-0 podman[90295]: 2026-01-21 13:45:46.448486967 +0000 UTC m=+0.168516799 container remove 74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_noether, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 13:45:46 compute-0 systemd[1]: libpod-conmon-74610185b02678b352ca4f40c2d3499b9ec3bcb851a72a9ac2a43675455eef03.scope: Deactivated successfully.
Jan 21 13:45:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 13:45:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/267142281' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:46 compute-0 podman[90339]: 2026-01-21 13:45:46.605264631 +0000 UTC m=+0.053737048 container create 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:45:46 compute-0 systemd[1]: Started libpod-conmon-6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd.scope.
Jan 21 13:45:46 compute-0 podman[90339]: 2026-01-21 13:45:46.579269154 +0000 UTC m=+0.027741651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:45:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2191c7d3f08c21564364c7db76c04b265152261c05af10f813a9f2af07523/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2191c7d3f08c21564364c7db76c04b265152261c05af10f813a9f2af07523/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2191c7d3f08c21564364c7db76c04b265152261c05af10f813a9f2af07523/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f2191c7d3f08c21564364c7db76c04b265152261c05af10f813a9f2af07523/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:46 compute-0 podman[90339]: 2026-01-21 13:45:46.708657737 +0000 UTC m=+0.157130194 container init 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 13:45:46 compute-0 podman[90339]: 2026-01-21 13:45:46.722527562 +0000 UTC m=+0.171000019 container start 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 13:45:46 compute-0 podman[90339]: 2026-01-21 13:45:46.727423641 +0000 UTC m=+0.175896108 container attach 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 13:45:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 21 13:45:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/267142281' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 21 13:45:46 compute-0 elegant_haibt[90243]: pool 'volumes' created
Jan 21 13:45:46 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 21 13:45:46 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:45:46 compute-0 ceph-mon[75031]: pgmap v42: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 21 13:45:46 compute-0 ceph-mon[75031]: osdmap e19: 3 total, 3 up, 3 in
Jan 21 13:45:46 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/267142281' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:46 compute-0 systemd[1]: libpod-d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6.scope: Deactivated successfully.
Jan 21 13:45:46 compute-0 podman[90194]: 2026-01-21 13:45:46.899078254 +0000 UTC m=+1.821026179 container died d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 13:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d6cb686b2c137970617a1bcacf25ec6fe5b4d14256e24f83102569416b75672-merged.mount: Deactivated successfully.
Jan 21 13:45:46 compute-0 podman[90194]: 2026-01-21 13:45:46.952119774 +0000 UTC m=+1.874067689 container remove d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6 (image=quay.io/ceph/ceph:v20, name=elegant_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:45:46 compute-0 systemd[1]: libpod-conmon-d916295925f7fc8fd6fa7ca2f8db58274533bad004e2bc57e7c7683c3df05ac6.scope: Deactivated successfully.
Jan 21 13:45:46 compute-0 sudo[90176]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:47 compute-0 sudo[90408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecndbbczzogwcvygyinxymbgdyoyksxc ; /usr/bin/python3'
Jan 21 13:45:47 compute-0 sudo[90408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:47 compute-0 python3[90414]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:47 compute-0 podman[90449]: 2026-01-21 13:45:47.299220953 +0000 UTC m=+0.041522894 container create adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 13:45:47 compute-0 systemd[1]: Started libpod-conmon-adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f.scope.
Jan 21 13:45:47 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a189270390c06c0329f38073aaa279d98f5bd4e082ed0a9a29c37b94cbafbc9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a189270390c06c0329f38073aaa279d98f5bd4e082ed0a9a29c37b94cbafbc9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:47 compute-0 podman[90449]: 2026-01-21 13:45:47.376127569 +0000 UTC m=+0.118429530 container init adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 13:45:47 compute-0 podman[90449]: 2026-01-21 13:45:47.281185867 +0000 UTC m=+0.023487828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:47 compute-0 podman[90449]: 2026-01-21 13:45:47.382144264 +0000 UTC m=+0.124446205 container start adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:47 compute-0 podman[90449]: 2026-01-21 13:45:47.386232923 +0000 UTC m=+0.128534864 container attach adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 13:45:47 compute-0 lvm[90491]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:45:47 compute-0 lvm[90491]: VG ceph_vg0 finished
Jan 21 13:45:47 compute-0 lvm[90490]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:45:47 compute-0 lvm[90490]: VG ceph_vg1 finished
Jan 21 13:45:47 compute-0 lvm[90493]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:45:47 compute-0 lvm[90493]: VG ceph_vg2 finished
Jan 21 13:45:47 compute-0 lvm[90496]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:45:47 compute-0 lvm[90496]: VG ceph_vg0 finished
Jan 21 13:45:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v45: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 21 13:45:47 compute-0 modest_dirac[90355]: {}
Jan 21 13:45:47 compute-0 systemd[1]: libpod-6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd.scope: Deactivated successfully.
Jan 21 13:45:47 compute-0 podman[90339]: 2026-01-21 13:45:47.610604879 +0000 UTC m=+1.059077326 container died 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 13:45:47 compute-0 systemd[1]: libpod-6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd.scope: Consumed 1.364s CPU time.
Jan 21 13:45:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-03f2191c7d3f08c21564364c7db76c04b265152261c05af10f813a9f2af07523-merged.mount: Deactivated successfully.
Jan 21 13:45:47 compute-0 podman[90339]: 2026-01-21 13:45:47.655677808 +0000 UTC m=+1.104150245 container remove 6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:47 compute-0 systemd[1]: libpod-conmon-6107b453358cf499c01b42a3b10f33e6423ffb89b012b64c81eb1e8020f31acd.scope: Deactivated successfully.
Jan 21 13:45:47 compute-0 sudo[90235]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:45:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:45:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:47 compute-0 sudo[90527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:45:47 compute-0 sudo[90527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:45:47 compute-0 sudo[90527]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 13:45:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2835557232' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 21 13:45:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2835557232' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 21 13:45:47 compute-0 eloquent_booth[90478]: pool 'backups' created
Jan 21 13:45:47 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 21 13:45:47 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:45:47 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/267142281' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:47 compute-0 ceph-mon[75031]: osdmap e20: 3 total, 3 up, 3 in
Jan 21 13:45:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:45:47 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2835557232' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:47 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2835557232' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:47 compute-0 ceph-mon[75031]: osdmap e21: 3 total, 3 up, 3 in
Jan 21 13:45:47 compute-0 systemd[1]: libpod-adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f.scope: Deactivated successfully.
Jan 21 13:45:47 compute-0 podman[90449]: 2026-01-21 13:45:47.903991081 +0000 UTC m=+0.646293042 container died adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 13:45:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a189270390c06c0329f38073aaa279d98f5bd4e082ed0a9a29c37b94cbafbc9-merged.mount: Deactivated successfully.
Jan 21 13:45:47 compute-0 podman[90449]: 2026-01-21 13:45:47.948594758 +0000 UTC m=+0.690896699 container remove adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f (image=quay.io/ceph/ceph:v20, name=eloquent_booth, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:47 compute-0 systemd[1]: libpod-conmon-adba9ad47e4920ebb3f229266e25033a845f7b0d4874db53ac55c2085df6803f.scope: Deactivated successfully.
Jan 21 13:45:47 compute-0 sudo[90408]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:48 compute-0 sudo[90591]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmojbucontkwpumtkaaqejehrjalyanp ; /usr/bin/python3'
Jan 21 13:45:48 compute-0 sudo[90591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:48 compute-0 python3[90593]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:48 compute-0 podman[90594]: 2026-01-21 13:45:48.40768979 +0000 UTC m=+0.063680269 container create 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 13:45:48 compute-0 systemd[1]: Started libpod-conmon-89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1.scope.
Jan 21 13:45:48 compute-0 podman[90594]: 2026-01-21 13:45:48.382464141 +0000 UTC m=+0.038454630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:48 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b1cfae45ea9ade2a41d0fae9e1914f6cd9ff29fc9cf151a6b60ecdc4f54235f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b1cfae45ea9ade2a41d0fae9e1914f6cd9ff29fc9cf151a6b60ecdc4f54235f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:48 compute-0 podman[90594]: 2026-01-21 13:45:48.50547173 +0000 UTC m=+0.161462219 container init 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 21 13:45:48 compute-0 podman[90594]: 2026-01-21 13:45:48.51166919 +0000 UTC m=+0.167659669 container start 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:48 compute-0 podman[90594]: 2026-01-21 13:45:48.517175503 +0000 UTC m=+0.173166002 container attach 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:45:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 21 13:45:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 21 13:45:48 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 21 13:45:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 13:45:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/967236663' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:45:48 compute-0 ceph-mon[75031]: pgmap v45: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 21 13:45:48 compute-0 ceph-mon[75031]: osdmap e22: 3 total, 3 up, 3 in
Jan 21 13:45:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v48: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 21 13:45:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 21 13:45:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/967236663' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 21 13:45:49 compute-0 eager_roentgen[90609]: pool 'images' created
Jan 21 13:45:49 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 21 13:45:49 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:45:49 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/967236663' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:49 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/967236663' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:49 compute-0 ceph-mon[75031]: osdmap e23: 3 total, 3 up, 3 in
Jan 21 13:45:49 compute-0 systemd[1]: libpod-89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1.scope: Deactivated successfully.
Jan 21 13:45:49 compute-0 podman[90594]: 2026-01-21 13:45:49.940460058 +0000 UTC m=+1.596450527 container died 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b1cfae45ea9ade2a41d0fae9e1914f6cd9ff29fc9cf151a6b60ecdc4f54235f-merged.mount: Deactivated successfully.
Jan 21 13:45:49 compute-0 podman[90594]: 2026-01-21 13:45:49.979413499 +0000 UTC m=+1.635403968 container remove 89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1 (image=quay.io/ceph/ceph:v20, name=eager_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:49 compute-0 systemd[1]: libpod-conmon-89342e3ad57ae5bc390750bc45efe9c7f25946d46cd5b07f33c90aace8b6b5f1.scope: Deactivated successfully.
Jan 21 13:45:50 compute-0 sudo[90591]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:50 compute-0 sudo[90670]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fevtijexosihjsxjhchpppnqxttheafi ; /usr/bin/python3'
Jan 21 13:45:50 compute-0 sudo[90670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:50 compute-0 python3[90672]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:50 compute-0 podman[90673]: 2026-01-21 13:45:50.404189033 +0000 UTC m=+0.048759438 container create c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 13:45:50 compute-0 systemd[1]: Started libpod-conmon-c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a.scope.
Jan 21 13:45:50 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4ad0e0507424ea21e76b3510031fb9415f1aa3e69520b41384c6f33348ccee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4ad0e0507424ea21e76b3510031fb9415f1aa3e69520b41384c6f33348ccee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:50 compute-0 podman[90673]: 2026-01-21 13:45:50.383906183 +0000 UTC m=+0.028476598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:50 compute-0 podman[90673]: 2026-01-21 13:45:50.490973428 +0000 UTC m=+0.135543843 container init c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:50 compute-0 podman[90673]: 2026-01-21 13:45:50.49728173 +0000 UTC m=+0.141852135 container start c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 13:45:50 compute-0 podman[90673]: 2026-01-21 13:45:50.501566273 +0000 UTC m=+0.146136668 container attach c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 21 13:45:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 21 13:45:50 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 21 13:45:50 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:45:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 13:45:50 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3279425259' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:50 compute-0 ceph-mon[75031]: pgmap v48: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 21 13:45:50 compute-0 ceph-mon[75031]: osdmap e24: 3 total, 3 up, 3 in
Jan 21 13:45:50 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3279425259' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v51: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:45:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 21 13:45:51 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3279425259' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 21 13:45:51 compute-0 cranky_hertz[90689]: pool 'cephfs.cephfs.meta' created
Jan 21 13:45:51 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 21 13:45:51 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:45:51 compute-0 systemd[1]: libpod-c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a.scope: Deactivated successfully.
Jan 21 13:45:51 compute-0 podman[90673]: 2026-01-21 13:45:51.944645797 +0000 UTC m=+1.589216212 container died c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c4ad0e0507424ea21e76b3510031fb9415f1aa3e69520b41384c6f33348ccee-merged.mount: Deactivated successfully.
Jan 21 13:45:51 compute-0 podman[90673]: 2026-01-21 13:45:51.983803232 +0000 UTC m=+1.628373627 container remove c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a (image=quay.io/ceph/ceph:v20, name=cranky_hertz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 13:45:51 compute-0 systemd[1]: libpod-conmon-c8f975db4a08f5de60459bce001e079ecc75d7fb8a5cba56f4a9ead6df79047a.scope: Deactivated successfully.
Jan 21 13:45:52 compute-0 sudo[90670]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:52 compute-0 sudo[90752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqfdwgeuwbsfqqmqpvrpzopuofxodtwv ; /usr/bin/python3'
Jan 21 13:45:52 compute-0 sudo[90752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:52 compute-0 python3[90754]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:52 compute-0 podman[90755]: 2026-01-21 13:45:52.342289275 +0000 UTC m=+0.044825923 container create 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:52 compute-0 systemd[1]: Started libpod-conmon-5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521.scope.
Jan 21 13:45:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2f096741a432ad332453e79c2541e8a720501fdd6f8079150c2ac1dcbb27d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2f096741a432ad332453e79c2541e8a720501fdd6f8079150c2ac1dcbb27d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:52 compute-0 podman[90755]: 2026-01-21 13:45:52.409719203 +0000 UTC m=+0.112255831 container init 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 13:45:52 compute-0 podman[90755]: 2026-01-21 13:45:52.416436985 +0000 UTC m=+0.118973593 container start 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 13:45:52 compute-0 podman[90755]: 2026-01-21 13:45:52.319026393 +0000 UTC m=+0.021563031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:52 compute-0 podman[90755]: 2026-01-21 13:45:52.420650767 +0000 UTC m=+0.123187395 container attach 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 13:45:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 21 13:45:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3204446367' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 21 13:45:52 compute-0 ceph-mon[75031]: pgmap v51: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:45:52 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3279425259' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:52 compute-0 ceph-mon[75031]: osdmap e25: 3 total, 3 up, 3 in
Jan 21 13:45:52 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3204446367' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 21 13:45:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3204446367' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 21 13:45:52 compute-0 sad_swirles[90770]: pool 'cephfs.cephfs.data' created
Jan 21 13:45:52 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 21 13:45:52 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:45:52 compute-0 systemd[1]: libpod-5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521.scope: Deactivated successfully.
Jan 21 13:45:52 compute-0 podman[90755]: 2026-01-21 13:45:52.96600097 +0000 UTC m=+0.668537598 container died 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 13:45:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc2f096741a432ad332453e79c2541e8a720501fdd6f8079150c2ac1dcbb27d7-merged.mount: Deactivated successfully.
Jan 21 13:45:53 compute-0 podman[90755]: 2026-01-21 13:45:53.001425206 +0000 UTC m=+0.703961814 container remove 5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521 (image=quay.io/ceph/ceph:v20, name=sad_swirles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 13:45:53 compute-0 systemd[1]: libpod-conmon-5d158cbdb463ac82576c2fa48865ebb59de939c27d5f92a7b0428432a9077521.scope: Deactivated successfully.
Jan 21 13:45:53 compute-0 sudo[90752]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:53 compute-0 sudo[90834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkwntjnhpyvpcppgwgjhunszlliutdij ; /usr/bin/python3'
Jan 21 13:45:53 compute-0 sudo[90834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:53 compute-0 python3[90836]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:53 compute-0 podman[90837]: 2026-01-21 13:45:53.386255964 +0000 UTC m=+0.041733807 container create 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 13:45:53 compute-0 systemd[1]: Started libpod-conmon-0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b.scope.
Jan 21 13:45:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ddecaf4929377705ecfd94fe960a272b2ee06c202bfe21bb31ab9b2131cd19/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ddecaf4929377705ecfd94fe960a272b2ee06c202bfe21bb31ab9b2131cd19/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:53 compute-0 podman[90837]: 2026-01-21 13:45:53.363874904 +0000 UTC m=+0.019352767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:53 compute-0 podman[90837]: 2026-01-21 13:45:53.463159431 +0000 UTC m=+0.118637294 container init 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 13:45:53 compute-0 podman[90837]: 2026-01-21 13:45:53.472339043 +0000 UTC m=+0.127816886 container start 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:53 compute-0 podman[90837]: 2026-01-21 13:45:53.475981531 +0000 UTC m=+0.131459384 container attach 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 13:45:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:45:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:45:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 21 13:45:53 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1458626047' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 21 13:45:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 21 13:45:53 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3204446367' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 21 13:45:53 compute-0 ceph-mon[75031]: osdmap e26: 3 total, 3 up, 3 in
Jan 21 13:45:53 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1458626047' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 21 13:45:53 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1458626047' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 21 13:45:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 21 13:45:53 compute-0 gracious_jones[90852]: enabled application 'rbd' on pool 'vms'
Jan 21 13:45:53 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 21 13:45:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:45:53 compute-0 systemd[1]: libpod-0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b.scope: Deactivated successfully.
Jan 21 13:45:53 compute-0 podman[90837]: 2026-01-21 13:45:53.980262523 +0000 UTC m=+0.635740366 container died 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 13:45:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-05ddecaf4929377705ecfd94fe960a272b2ee06c202bfe21bb31ab9b2131cd19-merged.mount: Deactivated successfully.
Jan 21 13:45:54 compute-0 podman[90837]: 2026-01-21 13:45:54.0169925 +0000 UTC m=+0.672470353 container remove 0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b (image=quay.io/ceph/ceph:v20, name=gracious_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 13:45:54 compute-0 systemd[1]: libpod-conmon-0f25b00755e57d081f18112ab83cc220cd28fd59abd7b58673506a7096f8473b.scope: Deactivated successfully.
Jan 21 13:45:54 compute-0 sudo[90834]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:54 compute-0 sudo[90914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmcnamifknzfxscmswcgkhfqhqhuvkqh ; /usr/bin/python3'
Jan 21 13:45:54 compute-0 sudo[90914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:54 compute-0 python3[90916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:54 compute-0 podman[90917]: 2026-01-21 13:45:54.359130758 +0000 UTC m=+0.048976133 container create d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:45:54 compute-0 systemd[1]: Started libpod-conmon-d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee.scope.
Jan 21 13:45:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b607dcc9ea8f92838c82f5776e3c041cc002254b708f0786abf37554adb97b53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b607dcc9ea8f92838c82f5776e3c041cc002254b708f0786abf37554adb97b53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:54 compute-0 podman[90917]: 2026-01-21 13:45:54.337285021 +0000 UTC m=+0.027130416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:54 compute-0 podman[90917]: 2026-01-21 13:45:54.43213786 +0000 UTC m=+0.121983255 container init d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:45:54 compute-0 podman[90917]: 2026-01-21 13:45:54.43876011 +0000 UTC m=+0.128605475 container start d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 13:45:54 compute-0 podman[90917]: 2026-01-21 13:45:54.442281396 +0000 UTC m=+0.132126781 container attach d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 13:45:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 21 13:45:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2483823024' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 21 13:45:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 21 13:45:54 compute-0 ceph-mon[75031]: pgmap v54: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:45:54 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1458626047' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 21 13:45:54 compute-0 ceph-mon[75031]: osdmap e27: 3 total, 3 up, 3 in
Jan 21 13:45:54 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2483823024' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 21 13:45:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2483823024' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 21 13:45:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 21 13:45:54 compute-0 festive_heisenberg[90932]: enabled application 'rbd' on pool 'volumes'
Jan 21 13:45:54 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 21 13:45:55 compute-0 systemd[1]: libpod-d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee.scope: Deactivated successfully.
Jan 21 13:45:55 compute-0 podman[90917]: 2026-01-21 13:45:55.005677445 +0000 UTC m=+0.695522810 container died d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b607dcc9ea8f92838c82f5776e3c041cc002254b708f0786abf37554adb97b53-merged.mount: Deactivated successfully.
Jan 21 13:45:55 compute-0 podman[90917]: 2026-01-21 13:45:55.048015917 +0000 UTC m=+0.737861282 container remove d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee (image=quay.io/ceph/ceph:v20, name=festive_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:55 compute-0 systemd[1]: libpod-conmon-d5df2936601d5602a4bd52150fe4b67abe3c948faf44bef93d99f521813bbeee.scope: Deactivated successfully.
Jan 21 13:45:55 compute-0 sudo[90914]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:55 compute-0 sudo[90993]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otlttrrsjrmobshipzxoidyxrlihvtrw ; /usr/bin/python3'
Jan 21 13:45:55 compute-0 sudo[90993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:55 compute-0 python3[90995]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:55 compute-0 podman[90996]: 2026-01-21 13:45:55.399655455 +0000 UTC m=+0.047923728 container create 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 13:45:55 compute-0 systemd[1]: Started libpod-conmon-444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96.scope.
Jan 21 13:45:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc5217dd500724f41518dfd77886bbca11f1f3552df460bed4b7942b58a7e40/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc5217dd500724f41518dfd77886bbca11f1f3552df460bed4b7942b58a7e40/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:55 compute-0 podman[90996]: 2026-01-21 13:45:55.373574876 +0000 UTC m=+0.021843179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:55 compute-0 podman[90996]: 2026-01-21 13:45:55.479934653 +0000 UTC m=+0.128202926 container init 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:45:55 compute-0 podman[90996]: 2026-01-21 13:45:55.484953324 +0000 UTC m=+0.133221607 container start 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 21 13:45:55 compute-0 podman[90996]: 2026-01-21 13:45:55.488363617 +0000 UTC m=+0.136631890 container attach 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:45:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:45:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 21 13:45:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4007662532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 21 13:45:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 21 13:45:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4007662532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 21 13:45:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 21 13:45:55 compute-0 peaceful_zhukovsky[91011]: enabled application 'rbd' on pool 'backups'
Jan 21 13:45:55 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2483823024' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 21 13:45:55 compute-0 ceph-mon[75031]: osdmap e28: 3 total, 3 up, 3 in
Jan 21 13:45:55 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/4007662532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 21 13:45:55 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 21 13:45:56 compute-0 systemd[1]: libpod-444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96.scope: Deactivated successfully.
Jan 21 13:45:56 compute-0 podman[90996]: 2026-01-21 13:45:56.006013812 +0000 UTC m=+0.654282095 container died 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 13:45:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dc5217dd500724f41518dfd77886bbca11f1f3552df460bed4b7942b58a7e40-merged.mount: Deactivated successfully.
Jan 21 13:45:56 compute-0 podman[90996]: 2026-01-21 13:45:56.043838225 +0000 UTC m=+0.692106498 container remove 444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96 (image=quay.io/ceph/ceph:v20, name=peaceful_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 13:45:56 compute-0 systemd[1]: libpod-conmon-444696945b5b9e39e5fc0b28827dd32f6b722cab9da4c8cbf820eb20c5212d96.scope: Deactivated successfully.
Jan 21 13:45:56 compute-0 sudo[90993]: pam_unix(sudo:session): session closed for user root
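The block above is one complete cycle of the pattern this whole section repeats: Ansible's command module launches a one-shot podman container with --entrypoint ceph, bind-mounts /etc/ceph and the assimilate conf, runs a single ceph CLI command against the mon, and tears the container down, which is why systemd records a full conmon scope create/init/start/attach/died/remove lifecycle per command. The kernel's "supports timestamps until 2038" lines are informational: the overlay bind mounts sit on XFS formatted without the bigtime feature, so inode timestamps are 32-bit. Note also that podman's events carry monotonic offsets (m=+...) and the journal can print them slightly out of order; the image pull at m=+0.021 above appears after the container create at m=+0.047. Stripped of the Ansible wrapping, the command (taken verbatim from the _raw_params above) is:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v20 \
      --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      osd pool application enable backups rbd

The same cycle repeats below for the images pool and the two cephfs pools, changing only the pool and application arguments.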
Jan 21 13:45:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:45:56 compute-0 sudo[91069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsxjbkbihkjpcieihzofbouxtfdisdds ; /usr/bin/python3'
Jan 21 13:45:56 compute-0 sudo[91069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:56 compute-0 python3[91071]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:56 compute-0 podman[91072]: 2026-01-21 13:45:56.43430607 +0000 UTC m=+0.055283305 container create b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 13:45:56 compute-0 systemd[1]: Started libpod-conmon-b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3.scope.
Jan 21 13:45:56 compute-0 podman[91072]: 2026-01-21 13:45:56.411034419 +0000 UTC m=+0.032011654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a9bb50bb52a32994440e6946f007b47286777685a6f041f2dd25b5ab928917/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a9bb50bb52a32994440e6946f007b47286777685a6f041f2dd25b5ab928917/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:56 compute-0 podman[91072]: 2026-01-21 13:45:56.525243985 +0000 UTC m=+0.146221250 container init b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 13:45:56 compute-0 podman[91072]: 2026-01-21 13:45:56.53038778 +0000 UTC m=+0.151365025 container start b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:56 compute-0 podman[91072]: 2026-01-21 13:45:56.536461826 +0000 UTC m=+0.157439091 container attach b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 13:45:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 21 13:45:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2452711801' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 21 13:45:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 21 13:45:56 compute-0 ceph-mon[75031]: pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:45:56 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/4007662532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 21 13:45:56 compute-0 ceph-mon[75031]: osdmap e29: 3 total, 3 up, 3 in
Jan 21 13:45:56 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2452711801' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 21 13:45:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2452711801' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 21 13:45:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 21 13:45:57 compute-0 lucid_almeida[91088]: enabled application 'rbd' on pool 'images'
Jan 21 13:45:57 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 21 13:45:57 compute-0 podman[91072]: 2026-01-21 13:45:57.020810587 +0000 UTC m=+0.641787792 container died b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 13:45:57 compute-0 systemd[1]: libpod-b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3.scope: Deactivated successfully.
Jan 21 13:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-41a9bb50bb52a32994440e6946f007b47286777685a6f041f2dd25b5ab928917-merged.mount: Deactivated successfully.
Jan 21 13:45:57 compute-0 podman[91072]: 2026-01-21 13:45:57.072470695 +0000 UTC m=+0.693447900 container remove b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3 (image=quay.io/ceph/ceph:v20, name=lucid_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:45:57 compute-0 systemd[1]: libpod-conmon-b68408829d90eb63ffa93de3f9416bee8b4613e49b0cbef2e0a7fbb26ea294d3.scope: Deactivated successfully.
Jan 21 13:45:57 compute-0 sudo[91069]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:57 compute-0 sudo[91149]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijvbnfezmmysglpxyyilatbkgtaafokp ; /usr/bin/python3'
Jan 21 13:45:57 compute-0 sudo[91149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:57 compute-0 python3[91151]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:57 compute-0 podman[91152]: 2026-01-21 13:45:57.445690254 +0000 UTC m=+0.039613608 container create 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 13:45:57 compute-0 systemd[1]: Started libpod-conmon-6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7.scope.
Jan 21 13:45:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed8824be6801e88f908de58a65a43f79da871c9cdb7bf2981ad63c90c8d7ce8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed8824be6801e88f908de58a65a43f79da871c9cdb7bf2981ad63c90c8d7ce8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:57 compute-0 podman[91152]: 2026-01-21 13:45:57.428640093 +0000 UTC m=+0.022563477 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:45:57 compute-0 podman[91152]: 2026-01-21 13:45:57.53214303 +0000 UTC m=+0.126066384 container init 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:45:57 compute-0 podman[91152]: 2026-01-21 13:45:57.538025303 +0000 UTC m=+0.131948667 container start 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 13:45:57 compute-0 podman[91152]: 2026-01-21 13:45:57.54202327 +0000 UTC m=+0.135946644 container attach 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:45:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 21 13:45:57 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2465980745' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 21 13:45:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 21 13:45:58 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2452711801' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 21 13:45:58 compute-0 ceph-mon[75031]: osdmap e30: 3 total, 3 up, 3 in
Jan 21 13:45:58 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2465980745' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 21 13:45:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2465980745' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 21 13:45:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 21 13:45:58 compute-0 cranky_napier[91167]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 21 13:45:58 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 21 13:45:58 compute-0 systemd[1]: libpod-6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7.scope: Deactivated successfully.
Jan 21 13:45:58 compute-0 podman[91192]: 2026-01-21 13:45:58.256322802 +0000 UTC m=+0.027035625 container died 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 13:45:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ed8824be6801e88f908de58a65a43f79da871c9cdb7bf2981ad63c90c8d7ce8-merged.mount: Deactivated successfully.
Jan 21 13:45:58 compute-0 podman[91192]: 2026-01-21 13:45:58.289638425 +0000 UTC m=+0.060351228 container remove 6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7 (image=quay.io/ceph/ceph:v20, name=cranky_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 13:45:58 compute-0 systemd[1]: libpod-conmon-6d641b689de534da0c7d5d31b050df5e2623f6743fa3902355e23c3e6136d8d7.scope: Deactivated successfully.
Jan 21 13:45:58 compute-0 sudo[91149]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:58 compute-0 sudo[91230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nudaqijvmdbjhocdeaeqbeiuzigtvfjf ; /usr/bin/python3'
Jan 21 13:45:58 compute-0 sudo[91230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:45:58 compute-0 python3[91232]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:45:58 compute-0 podman[91233]: 2026-01-21 13:45:58.701895787 +0000 UTC m=+0.058393101 container create 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 13:45:58 compute-0 systemd[1]: Started libpod-conmon-569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5.scope.
Jan 21 13:45:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e897a81b6f57b8428b25d495a1c6322c428411d3c3ab1d350510a0f1a6ef2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e897a81b6f57b8428b25d495a1c6322c428411d3c3ab1d350510a0f1a6ef2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:45:58 compute-0 podman[91233]: 2026-01-21 13:45:58.679440365 +0000 UTC m=+0.035937489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:45:58 compute-0 podman[91233]: 2026-01-21 13:45:58.778477066 +0000 UTC m=+0.134974170 container init 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 13:45:58 compute-0 podman[91233]: 2026-01-21 13:45:58.783752792 +0000 UTC m=+0.140249926 container start 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:45:58 compute-0 podman[91233]: 2026-01-21 13:45:58.787463842 +0000 UTC m=+0.143960936 container attach 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:45:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 21 13:45:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2482580776' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 21 13:45:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 21 13:45:59 compute-0 ceph-mon[75031]: pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:45:59 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2465980745' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 21 13:45:59 compute-0 ceph-mon[75031]: osdmap e31: 3 total, 3 up, 3 in
Jan 21 13:45:59 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2482580776' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 21 13:45:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2482580776' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 21 13:45:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 21 13:45:59 compute-0 funny_ellis[91248]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 21 13:45:59 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 21 13:45:59 compute-0 systemd[1]: libpod-569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5.scope: Deactivated successfully.
Jan 21 13:45:59 compute-0 podman[91233]: 2026-01-21 13:45:59.234893863 +0000 UTC m=+0.591390957 container died 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 13:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba4e897a81b6f57b8428b25d495a1c6322c428411d3c3ab1d350510a0f1a6ef2-merged.mount: Deactivated successfully.
Jan 21 13:45:59 compute-0 podman[91233]: 2026-01-21 13:45:59.273243428 +0000 UTC m=+0.629740522 container remove 569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5 (image=quay.io/ceph/ceph:v20, name=funny_ellis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 13:45:59 compute-0 sudo[91230]: pam_unix(sudo:session): session closed for user root
Jan 21 13:45:59 compute-0 systemd[1]: libpod-conmon-569352baefc05979698d0c98d3a0266b4ff492daf419126d3871488e2e3265f5.scope: Deactivated successfully.
Jan 21 13:45:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:00 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2482580776' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 21 13:46:00 compute-0 ceph-mon[75031]: osdmap e32: 3 total, 3 up, 3 in
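Each successful osd pool application enable commits a new osdmap epoch: the four enables above (backups, images, cephfs.cephfs.meta, cephfs.cephfs.data) walk the map from e28 to e32, and the mon leader logs a dispatch/finished audit pair per command plus a cluster-channel osdmap summary per epoch. The resulting state is never queried in this log; a standard readback of per-pool applications would be:

    # not run in this log; lists each pool with its enabled application(s)
    ceph osd pool ls detail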
Jan 21 13:46:00 compute-0 python3[91360]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:46:00 compute-0 python3[91431]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003159.9659805-36598-104284940752309/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
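Here Zuul templates a cephadm service specification (ceph_rgw.yml.j2 -> /tmp/ceph_rgw.yml); it is handed to ceph orch apply --in-file near the end of this section. The file's contents are never logged (content=NOT_LOGGING_PARAMETER). A minimal RGW spec of the kind orch apply accepts might look like the sketch below; the service_id and placement values are illustrative guesses, not taken from this log:

    cat > /tmp/ceph_rgw.yml <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
    EOF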
Jan 21 13:46:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:46:01 compute-0 ceph-mon[75031]: pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:01 compute-0 sudo[91531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qifopzgztwvgnfypygxuqznnrjtdrgmx ; /usr/bin/python3'
Jan 21 13:46:01 compute-0 sudo[91531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:01 compute-0 python3[91533]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:46:01 compute-0 sudo[91531]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:01 compute-0 sudo[91606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxcpfhprsfzyamxlugvkjwpbmsufjrer ; /usr/bin/python3'
Jan 21 13:46:01 compute-0 sudo[91606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:01 compute-0 python3[91608]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003161.094617-36612-233886849682918/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=75aa2a87d3a9bd957ba9f2b5be706649780713ad backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:46:01 compute-0 sudo[91606]: pam_unix(sudo:session): session closed for user root
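This copy refreshes /home/ceph-admin/assimilate_ceph.conf from a new template (ceph_rgw.conf.j2) and sets owner/group 167:167, the ceph user's uid/gid on RHEL-family Ceph builds, so the bind-mounted file is readable inside the container. Only one of its keys becomes visible later in this log (rgw_keystone_api_version, echoed back by assimilate-conf below); a plausible shape, with every key other than that one purely illustrative, would be:

    cat > /home/ceph-admin/assimilate_ceph.conf <<'EOF'
    [global]
    rgw_keystone_api_version = 3
    rgw_keystone_url = http://192.168.122.100:5000
    EOF
    chown 167:167 /home/ceph-admin/assimilate_ceph.conf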
Jan 21 13:46:02 compute-0 sudo[91656]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quvosprftprkwnvebvabhqdgtkqgoxoq ; /usr/bin/python3'
Jan 21 13:46:02 compute-0 sudo[91656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:02 compute-0 python3[91658]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:02 compute-0 podman[91659]: 2026-01-21 13:46:02.263588952 +0000 UTC m=+0.043154653 container create 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:02 compute-0 systemd[1]: Started libpod-conmon-2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9.scope.
Jan 21 13:46:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552ca93064c06b2a11d064ed82ce4117c64ce9678cec9abfcd7258d4db4b6db8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552ca93064c06b2a11d064ed82ce4117c64ce9678cec9abfcd7258d4db4b6db8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552ca93064c06b2a11d064ed82ce4117c64ce9678cec9abfcd7258d4db4b6db8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:02 compute-0 podman[91659]: 2026-01-21 13:46:02.247056213 +0000 UTC m=+0.026621934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:02 compute-0 podman[91659]: 2026-01-21 13:46:02.345167391 +0000 UTC m=+0.124733142 container init 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 13:46:02 compute-0 podman[91659]: 2026-01-21 13:46:02.351878403 +0000 UTC m=+0.131444104 container start 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 13:46:02 compute-0 podman[91659]: 2026-01-21 13:46:02.354953177 +0000 UTC m=+0.134518928 container attach 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 13:46:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 21 13:46:02 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1327509013' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 13:46:02 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1327509013' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 13:46:02 compute-0 nifty_jemison[91674]: 
Jan 21 13:46:02 compute-0 nifty_jemison[91674]: [global]
Jan 21 13:46:02 compute-0 nifty_jemison[91674]:         fsid = 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:46:02 compute-0 nifty_jemison[91674]:         mon_host = 192.168.122.100
Jan 21 13:46:02 compute-0 nifty_jemison[91674]:         rgw_keystone_api_version = 3
Jan 21 13:46:02 compute-0 systemd[1]: libpod-2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9.scope: Deactivated successfully.
Jan 21 13:46:02 compute-0 podman[91659]: 2026-01-21 13:46:02.773221504 +0000 UTC m=+0.552787205 container died 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 13:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-552ca93064c06b2a11d064ed82ce4117c64ce9678cec9abfcd7258d4db4b6db8-merged.mount: Deactivated successfully.
Jan 21 13:46:02 compute-0 podman[91659]: 2026-01-21 13:46:02.826635633 +0000 UTC m=+0.606201334 container remove 2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9 (image=quay.io/ceph/ceph:v20, name=nifty_jemison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:02 compute-0 sudo[91699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:02 compute-0 sudo[91699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:02 compute-0 sudo[91699]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:02 compute-0 systemd[1]: libpod-conmon-2968cffcd28652f07334b05aa3a656eb9718e4883b1d75bfa8ad8a34425981b9.scope: Deactivated successfully.
Jan 21 13:46:02 compute-0 sudo[91656]: pam_unix(sudo:session): session closed for user root
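ceph config assimilate-conf -i <file> moves every option it can into the monitors' central config database and prints back a minimal residual conf containing whatever it could not (or should not) store centrally; that residue is what the nifty_jemison container echoed above: the bootstrap fields a local ceph.conf still needs (fsid, mon_host) plus rgw_keystone_api_version. By hand, the same step and a check of what landed centrally would be:

    ceph config assimilate-conf -i /home/assimilate_ceph.conf
    # inspect the options now held in the mon config database
    ceph config dump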
Jan 21 13:46:02 compute-0 sudo[91736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 13:46:02 compute-0 sudo[91736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:03 compute-0 sudo[91784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suvrphaecghjewyceypcmrqulxcbjsdb ; /usr/bin/python3'
Jan 21 13:46:03 compute-0 sudo[91784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:03 compute-0 python3[91786]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:03 compute-0 ceph-mon[75031]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:03 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1327509013' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 21 13:46:03 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1327509013' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 21 13:46:03 compute-0 podman[91823]: 2026-01-21 13:46:03.250100554 +0000 UTC m=+0.043765507 container create 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 13:46:03 compute-0 systemd[1]: Started libpod-conmon-841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b.scope.
Jan 21 13:46:03 compute-0 podman[91837]: 2026-01-21 13:46:03.29836374 +0000 UTC m=+0.060310127 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f19319e99ed3a056af60e79eea469f30a7a2d8bcd64a2fd8e7d3347e04af1d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f19319e99ed3a056af60e79eea469f30a7a2d8bcd64a2fd8e7d3347e04af1d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f19319e99ed3a056af60e79eea469f30a7a2d8bcd64a2fd8e7d3347e04af1d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:03 compute-0 podman[91823]: 2026-01-21 13:46:03.229442696 +0000 UTC m=+0.023107669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:03 compute-0 podman[91823]: 2026-01-21 13:46:03.334524583 +0000 UTC m=+0.128189566 container init 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 13:46:03 compute-0 podman[91823]: 2026-01-21 13:46:03.340346863 +0000 UTC m=+0.134011816 container start 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:03 compute-0 podman[91823]: 2026-01-21 13:46:03.343814786 +0000 UTC m=+0.137479739 container attach 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 13:46:03 compute-0 podman[91837]: 2026-01-21 13:46:03.402504783 +0000 UTC m=+0.164451200 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 21 13:46:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/188019200' entity='client.admin' 
Jan 21 13:46:03 compute-0 quizzical_hofstadter[91859]: set ssl_option
Jan 21 13:46:03 compute-0 systemd[1]: libpod-841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b.scope: Deactivated successfully.
Jan 21 13:46:03 compute-0 podman[91823]: 2026-01-21 13:46:03.961675732 +0000 UTC m=+0.755340695 container died 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1f19319e99ed3a056af60e79eea469f30a7a2d8bcd64a2fd8e7d3347e04af1d-merged.mount: Deactivated successfully.
Jan 21 13:46:04 compute-0 podman[91823]: 2026-01-21 13:46:04.011083224 +0000 UTC m=+0.804748207 container remove 841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b (image=quay.io/ceph/ceph:v20, name=quizzical_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 13:46:04 compute-0 systemd[1]: libpod-conmon-841bd1c0f7bec9863c83a358be7a83171e1fc3763cd5f6b840a7faf0f10d0b4b.scope: Deactivated successfully.
Jan 21 13:46:04 compute-0 sudo[91784]: pam_unix(sudo:session): session closed for user root
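Unlike the config database, config-key set writes to the monitors' generic key/value store. Note that the value (no_sslv2:sslv3:no_tlsv1:no_tlsv1_1, per the Ansible _raw_params above) never appears in the mon's handle_command or audit lines, which record only key=ssl_option. If a readback were needed:

    # prints the stored value to stdout
    ceph config-key get ssl_option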
Jan 21 13:46:04 compute-0 sudo[91736]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:46:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:46:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:04 compute-0 sudo[92049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaniftdxkdxipyubptazoobopgvztynp ; /usr/bin/python3'
Jan 21 13:46:04 compute-0 sudo[92049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:04 compute-0 sudo[92048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:04 compute-0 sudo[92048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:04 compute-0 sudo[92048]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:04 compute-0 sudo[92076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:46:04 compute-0 sudo[92076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:04 compute-0 python3[92063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:04 compute-0 podman[92101]: 2026-01-21 13:46:04.433732016 +0000 UTC m=+0.045728885 container create 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:04 compute-0 systemd[1]: Started libpod-conmon-59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9.scope.
Jan 21 13:46:04 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33e89a4271820b6b6b4f0c028f5be57c08847657ec37dc1a341fb9597b97be9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33e89a4271820b6b6b4f0c028f5be57c08847657ec37dc1a341fb9597b97be9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33e89a4271820b6b6b4f0c028f5be57c08847657ec37dc1a341fb9597b97be9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:04 compute-0 podman[92101]: 2026-01-21 13:46:04.413813415 +0000 UTC m=+0.025810334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:04 compute-0 podman[92101]: 2026-01-21 13:46:04.518759778 +0000 UTC m=+0.130756697 container init 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 13:46:04 compute-0 podman[92101]: 2026-01-21 13:46:04.527668093 +0000 UTC m=+0.139665002 container start 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:04 compute-0 podman[92101]: 2026-01-21 13:46:04.531813134 +0000 UTC m=+0.143810023 container attach 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 13:46:04 compute-0 sudo[92076]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:46:04 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:46:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:46:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:46:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:46:04 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:46:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:46:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:46:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:46:04 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:04 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:46:04 compute-0 ceph-mgr[75322]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 21 13:46:04 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 21 13:46:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 13:46:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:04 compute-0 flamboyant_payne[92117]: Scheduled rgw.rgw update...
Jan 21 13:46:04 compute-0 ceph-mon[75031]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:04 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/188019200' entity='client.admin' 
Jan 21 13:46:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:46:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:46:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:46:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:04 compute-0 systemd[1]: libpod-59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9.scope: Deactivated successfully.
Jan 21 13:46:04 compute-0 podman[92101]: 2026-01-21 13:46:04.951823312 +0000 UTC m=+0.563820211 container died 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:04 compute-0 sudo[92169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:04 compute-0 sudo[92169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:04 compute-0 sudo[92169]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b33e89a4271820b6b6b4f0c028f5be57c08847657ec37dc1a341fb9597b97be9-merged.mount: Deactivated successfully.
Jan 21 13:46:04 compute-0 podman[92101]: 2026-01-21 13:46:04.997376761 +0000 UTC m=+0.609373630 container remove 59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9 (image=quay.io/ceph/ceph:v20, name=flamboyant_payne, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:05 compute-0 systemd[1]: libpod-conmon-59cb3f8660753c8747187d1ed0e0c35c6d3af29c534720135f1fbae8d6fd80a9.scope: Deactivated successfully.
Jan 21 13:46:05 compute-0 sudo[92049]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:05 compute-0 sudo[92207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:46:05 compute-0 sudo[92207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:05 compute-0 podman[92244]: 2026-01-21 13:46:05.324650391 +0000 UTC m=+0.049727551 container create fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:05 compute-0 systemd[1]: Started libpod-conmon-fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8.scope.
Jan 21 13:46:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:05 compute-0 podman[92244]: 2026-01-21 13:46:05.303551412 +0000 UTC m=+0.028628632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:05 compute-0 podman[92244]: 2026-01-21 13:46:05.407584543 +0000 UTC m=+0.132661723 container init fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:05 compute-0 podman[92244]: 2026-01-21 13:46:05.418740422 +0000 UTC m=+0.143817582 container start fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 13:46:05 compute-0 modest_hellman[92260]: 167 167
Jan 21 13:46:05 compute-0 podman[92244]: 2026-01-21 13:46:05.423419866 +0000 UTC m=+0.148497036 container attach fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 13:46:05 compute-0 systemd[1]: libpod-fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8.scope: Deactivated successfully.
Jan 21 13:46:05 compute-0 conmon[92260]: conmon fb094c73159caa40d79d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8.scope/container/memory.events
Jan 21 13:46:05 compute-0 podman[92244]: 2026-01-21 13:46:05.425406373 +0000 UTC m=+0.150483543 container died fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 13:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-814dd41e7dfe4ab1e81c1ff5d5ce127b50dbb323c0ac52c0dd601c08ea05ad35-merged.mount: Deactivated successfully.
Jan 21 13:46:05 compute-0 podman[92244]: 2026-01-21 13:46:05.462786466 +0000 UTC m=+0.187863626 container remove fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hellman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:05 compute-0 systemd[1]: libpod-conmon-fb094c73159caa40d79db9c840c08981962ffed2b7b2a5c826205b41eece27b8.scope: Deactivated successfully.
Jan 21 13:46:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:05 compute-0 podman[92284]: 2026-01-21 13:46:05.619058608 +0000 UTC m=+0.039514965 container create 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:05 compute-0 systemd[1]: Started libpod-conmon-411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294.scope.
Jan 21 13:46:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:05 compute-0 podman[92284]: 2026-01-21 13:46:05.601973745 +0000 UTC m=+0.022430122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:05 compute-0 podman[92284]: 2026-01-21 13:46:05.6991355 +0000 UTC m=+0.119591877 container init 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:05 compute-0 podman[92284]: 2026-01-21 13:46:05.705250319 +0000 UTC m=+0.125706676 container start 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 21 13:46:05 compute-0 podman[92284]: 2026-01-21 13:46:05.708125398 +0000 UTC m=+0.128581755 container attach 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 13:46:05 compute-0 ceph-mon[75031]: from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:46:05 compute-0 ceph-mon[75031]: Saving service rgw.rgw spec with placement compute-0
Jan 21 13:46:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:46:06 compute-0 python3[92388]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:46:06 compute-0 flamboyant_blackwell[92301]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:46:06 compute-0 flamboyant_blackwell[92301]: --> All data devices are unavailable
Jan 21 13:46:06 compute-0 systemd[1]: libpod-411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294.scope: Deactivated successfully.
Jan 21 13:46:06 compute-0 podman[92284]: 2026-01-21 13:46:06.20203129 +0000 UTC m=+0.622487647 container died 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 13:46:06 compute-0 python3[92478]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003165.8595064-36653-261769494656179/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:46:06 compute-0 sudo[92526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkjsefnpcqlljffgxgvmsqbvgmjpnnmq ; /usr/bin/python3'
Jan 21 13:46:06 compute-0 sudo[92526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:07 compute-0 python3[92528]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:07 compute-0 ceph-mon[75031]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8e5431cb223ae795583b62e8780c9809fbeb5fc5499cc59c43e27bddc12f072-merged.mount: Deactivated successfully.
Jan 21 13:46:07 compute-0 podman[92284]: 2026-01-21 13:46:07.138860633 +0000 UTC m=+1.559316980 container remove 411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:07 compute-0 systemd[1]: libpod-conmon-411ebc9f75dcc395465799190b737bb673816178e3d61cade3551f2165696294.scope: Deactivated successfully.
Jan 21 13:46:07 compute-0 sudo[92207]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:07 compute-0 sudo[92543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:07 compute-0 sudo[92543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:07 compute-0 sudo[92543]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:07 compute-0 podman[92529]: 2026-01-21 13:46:07.159628234 +0000 UTC m=+0.131158607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:07 compute-0 sudo[92568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:46:07 compute-0 sudo[92568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:07 compute-0 podman[92529]: 2026-01-21 13:46:07.345755437 +0000 UTC m=+0.317285780 container create bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:07 compute-0 systemd[1]: Started libpod-conmon-bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b.scope.
Jan 21 13:46:07 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba801e7969df5b75e9f9018e0a09090fc7b10a7cc738d42256afe03288fd222f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba801e7969df5b75e9f9018e0a09090fc7b10a7cc738d42256afe03288fd222f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba801e7969df5b75e9f9018e0a09090fc7b10a7cc738d42256afe03288fd222f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:07 compute-0 podman[92529]: 2026-01-21 13:46:07.466549773 +0000 UTC m=+0.438080206 container init bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:07 compute-0 podman[92529]: 2026-01-21 13:46:07.473357197 +0000 UTC m=+0.444887540 container start bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:07 compute-0 podman[92529]: 2026-01-21 13:46:07.477706432 +0000 UTC m=+0.449236785 container attach bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 13:46:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:07 compute-0 podman[92609]: 2026-01-21 13:46:07.573843362 +0000 UTC m=+0.044770542 container create 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:07 compute-0 systemd[1]: Started libpod-conmon-7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44.scope.
Jan 21 13:46:07 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:07 compute-0 podman[92609]: 2026-01-21 13:46:07.640140733 +0000 UTC m=+0.111067943 container init 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:07 compute-0 podman[92609]: 2026-01-21 13:46:07.648512195 +0000 UTC m=+0.119439375 container start 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 21 13:46:07 compute-0 podman[92609]: 2026-01-21 13:46:07.556489374 +0000 UTC m=+0.027416574 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:07 compute-0 infallible_wright[92643]: 167 167
Jan 21 13:46:07 compute-0 systemd[1]: libpod-7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44.scope: Deactivated successfully.
Jan 21 13:46:07 compute-0 conmon[92643]: conmon 7a66a71644e3356b1e80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44.scope/container/memory.events
Jan 21 13:46:07 compute-0 podman[92609]: 2026-01-21 13:46:07.656016646 +0000 UTC m=+0.126943826 container attach 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 13:46:07 compute-0 podman[92609]: 2026-01-21 13:46:07.656471657 +0000 UTC m=+0.127398837 container died 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 13:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d770cfef0926bba36371ee8250bdc64a654be666cb21ef44dcab2c2759398cfc-merged.mount: Deactivated successfully.
Jan 21 13:46:07 compute-0 podman[92609]: 2026-01-21 13:46:07.841903943 +0000 UTC m=+0.312831143 container remove 7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_wright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:07 compute-0 systemd[1]: libpod-conmon-7a66a71644e3356b1e807a813d45b41a182a00524efebd7b60e3fe458e09de44.scope: Deactivated successfully.
Jan 21 13:46:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:46:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 21 13:46:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 21 13:46:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 21 13:46:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 21 13:46:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 21 13:46:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 21 13:46:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 21 13:46:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 21 13:46:07 compute-0 ceph-mon[75031]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 13:46:07 compute-0 ceph-mon[75031]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 21 13:46:07 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0[75027]: 2026-01-21T13:46:07.954+0000 7f821bd3a640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 13:46:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 21 13:46:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e2 new map
Jan 21 13:46:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-01-21T13:46:07.955917+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T13:46:07.955594+0000
                                           modified        2026-01-21T13:46:07.955594+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Jan 21 13:46:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 21 13:46:07 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 21 13:46:07 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 21 13:46:07 compute-0 ceph-mgr[75322]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 21 13:46:07 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 21 13:46:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 13:46:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 21 13:46:08 compute-0 systemd[1]: libpod-bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b.scope: Deactivated successfully.
Jan 21 13:46:08 compute-0 podman[92529]: 2026-01-21 13:46:08.003388571 +0000 UTC m=+0.974918914 container died bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 13:46:08 compute-0 podman[92670]: 2026-01-21 13:46:08.016895077 +0000 UTC m=+0.050166032 container create 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba801e7969df5b75e9f9018e0a09090fc7b10a7cc738d42256afe03288fd222f-merged.mount: Deactivated successfully.
Jan 21 13:46:08 compute-0 podman[92529]: 2026-01-21 13:46:08.043379707 +0000 UTC m=+1.014910070 container remove bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b (image=quay.io/ceph/ceph:v20, name=eager_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 13:46:08 compute-0 sudo[92526]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:08 compute-0 podman[92670]: 2026-01-21 13:46:07.99130356 +0000 UTC m=+0.024574515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:08 compute-0 systemd[1]: Started libpod-conmon-6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c.scope.
Jan 21 13:46:08 compute-0 systemd[1]: libpod-conmon-bc7cc20e28fa25c430d5d5e4b3a1029e769568e7c384dd1835ee7871d5b7851b.scope: Deactivated successfully.
Jan 21 13:46:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 21 13:46:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 21 13:46:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 21 13:46:08 compute-0 ceph-mon[75031]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 21 13:46:08 compute-0 ceph-mon[75031]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 21 13:46:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 21 13:46:08 compute-0 ceph-mon[75031]: osdmap e33: 3 total, 3 up, 3 in
Jan 21 13:46:08 compute-0 ceph-mon[75031]: fsmap cephfs:0
Jan 21 13:46:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898d7e15662df17694a71c15377a33441c5ac7dc532a6636ea4178c74441fd24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898d7e15662df17694a71c15377a33441c5ac7dc532a6636ea4178c74441fd24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898d7e15662df17694a71c15377a33441c5ac7dc532a6636ea4178c74441fd24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898d7e15662df17694a71c15377a33441c5ac7dc532a6636ea4178c74441fd24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:08 compute-0 podman[92670]: 2026-01-21 13:46:08.127213781 +0000 UTC m=+0.160484726 container init 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:08 compute-0 podman[92670]: 2026-01-21 13:46:08.141933285 +0000 UTC m=+0.175204280 container start 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 13:46:08 compute-0 podman[92670]: 2026-01-21 13:46:08.14664262 +0000 UTC m=+0.179913605 container attach 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 13:46:08 compute-0 sudo[92727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbcsnbkcygyzvadxwuvqkbdwlxvycymx ; /usr/bin/python3'
Jan 21 13:46:08 compute-0 sudo[92727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:08 compute-0 python3[92729]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:08 compute-0 brave_murdock[92699]: {
Jan 21 13:46:08 compute-0 brave_murdock[92699]:     "0": [
Jan 21 13:46:08 compute-0 brave_murdock[92699]:         {
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "devices": [
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "/dev/loop3"
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             ],
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_name": "ceph_lv0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_size": "21470642176",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "name": "ceph_lv0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "tags": {
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.crush_device_class": "",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.encrypted": "0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.osd_id": "0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.type": "block",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.vdo": "0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.with_tpm": "0"
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             },
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "type": "block",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "vg_name": "ceph_vg0"
Jan 21 13:46:08 compute-0 brave_murdock[92699]:         }
Jan 21 13:46:08 compute-0 brave_murdock[92699]:     ],
Jan 21 13:46:08 compute-0 brave_murdock[92699]:     "1": [
Jan 21 13:46:08 compute-0 brave_murdock[92699]:         {
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "devices": [
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "/dev/loop4"
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             ],
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_name": "ceph_lv1",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_size": "21470642176",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "name": "ceph_lv1",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "tags": {
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.crush_device_class": "",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.encrypted": "0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.osd_id": "1",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.type": "block",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.vdo": "0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.with_tpm": "0"
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             },
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "type": "block",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "vg_name": "ceph_vg1"
Jan 21 13:46:08 compute-0 brave_murdock[92699]:         }
Jan 21 13:46:08 compute-0 brave_murdock[92699]:     ],
Jan 21 13:46:08 compute-0 brave_murdock[92699]:     "2": [
Jan 21 13:46:08 compute-0 brave_murdock[92699]:         {
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "devices": [
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "/dev/loop5"
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             ],
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_name": "ceph_lv2",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_size": "21470642176",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "name": "ceph_lv2",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "tags": {
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.crush_device_class": "",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.encrypted": "0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.osd_id": "2",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.type": "block",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.vdo": "0",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:                 "ceph.with_tpm": "0"
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             },
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "type": "block",
Jan 21 13:46:08 compute-0 brave_murdock[92699]:             "vg_name": "ceph_vg2"
Jan 21 13:46:08 compute-0 brave_murdock[92699]:         }
Jan 21 13:46:08 compute-0 brave_murdock[92699]:     ]
Jan 21 13:46:08 compute-0 brave_murdock[92699]: }
Jan 21 13:46:08 compute-0 podman[92734]: 2026-01-21 13:46:08.445722579 +0000 UTC m=+0.051385582 container create 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 13:46:08 compute-0 systemd[1]: libpod-6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c.scope: Deactivated successfully.
Jan 21 13:46:08 compute-0 podman[92670]: 2026-01-21 13:46:08.463255702 +0000 UTC m=+0.496526687 container died 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:08 compute-0 systemd[1]: Started libpod-conmon-65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c.scope.
Jan 21 13:46:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-898d7e15662df17694a71c15377a33441c5ac7dc532a6636ea4178c74441fd24-merged.mount: Deactivated successfully.
Jan 21 13:46:08 compute-0 podman[92734]: 2026-01-21 13:46:08.421041812 +0000 UTC m=+0.026704825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:08 compute-0 podman[92670]: 2026-01-21 13:46:08.520057503 +0000 UTC m=+0.553328458 container remove 6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_murdock, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebe12efe6b10901420571dc7a7ba643b04748d1e5f57e0898b4cd16b156f614/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebe12efe6b10901420571dc7a7ba643b04748d1e5f57e0898b4cd16b156f614/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebe12efe6b10901420571dc7a7ba643b04748d1e5f57e0898b4cd16b156f614/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:08 compute-0 systemd[1]: libpod-conmon-6269f1f49c3dbbe477c3e1fc0bd3a4468753a21ac809fccbc0abbded78b3927c.scope: Deactivated successfully.
Jan 21 13:46:08 compute-0 podman[92734]: 2026-01-21 13:46:08.549521544 +0000 UTC m=+0.155184557 container init 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:08 compute-0 podman[92734]: 2026-01-21 13:46:08.558593504 +0000 UTC m=+0.164256477 container start 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 13:46:08 compute-0 podman[92734]: 2026-01-21 13:46:08.562149799 +0000 UTC m=+0.167812772 container attach 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 21 13:46:08 compute-0 sudo[92568]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:08 compute-0 sudo[92769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:08 compute-0 sudo[92769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:08 compute-0 sudo[92769]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:08 compute-0 sudo[92803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:46:08 compute-0 sudo[92803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:08 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:46:08 compute-0 ceph-mgr[75322]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 21 13:46:08 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 21 13:46:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 13:46:08 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:09 compute-0 gallant_yonath[92758]: Scheduled mds.cephfs update...
Jan 21 13:46:09 compute-0 systemd[1]: libpod-65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c.scope: Deactivated successfully.
Jan 21 13:46:09 compute-0 podman[92734]: 2026-01-21 13:46:09.020408831 +0000 UTC m=+0.626071794 container died 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:09 compute-0 podman[92851]: 2026-01-21 13:46:09.031657313 +0000 UTC m=+0.048042352 container create 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-cebe12efe6b10901420571dc7a7ba643b04748d1e5f57e0898b4cd16b156f614-merged.mount: Deactivated successfully.
Jan 21 13:46:09 compute-0 podman[92734]: 2026-01-21 13:46:09.062183519 +0000 UTC m=+0.667846482 container remove 65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c (image=quay.io/ceph/ceph:v20, name=gallant_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 13:46:09 compute-0 systemd[1]: Started libpod-conmon-175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff.scope.
Jan 21 13:46:09 compute-0 systemd[1]: libpod-conmon-65bea75235379929f675d5f636674b2cde018904df41077776d8546bc5606a5c.scope: Deactivated successfully.
Jan 21 13:46:09 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:09 compute-0 sudo[92727]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:09 compute-0 podman[92851]: 2026-01-21 13:46:09.091037606 +0000 UTC m=+0.107422655 container init 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:09 compute-0 podman[92851]: 2026-01-21 13:46:09.096910217 +0000 UTC m=+0.113295256 container start 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 13:46:09 compute-0 lucid_lewin[92882]: 167 167
Jan 21 13:46:09 compute-0 systemd[1]: libpod-175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff.scope: Deactivated successfully.
Jan 21 13:46:09 compute-0 podman[92851]: 2026-01-21 13:46:09.10031027 +0000 UTC m=+0.116695409 container attach 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 13:46:09 compute-0 podman[92851]: 2026-01-21 13:46:09.100877283 +0000 UTC m=+0.117262322 container died 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:09 compute-0 ceph-mon[75031]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:09 compute-0 ceph-mon[75031]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:46:09 compute-0 ceph-mon[75031]: Saving service mds.cephfs spec with placement compute-0
Jan 21 13:46:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:09 compute-0 podman[92851]: 2026-01-21 13:46:09.01248138 +0000 UTC m=+0.028866459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f65a3f83f9185fc129f2a72939a8b2e88253125228d2fafa05c897a8d396399c-merged.mount: Deactivated successfully.
Jan 21 13:46:09 compute-0 podman[92851]: 2026-01-21 13:46:09.140489189 +0000 UTC m=+0.156874228 container remove 175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:09 compute-0 systemd[1]: libpod-conmon-175a8a90da716d3f20b9e4d19ff57504f02e9407772353ba35375ab6631e90ff.scope: Deactivated successfully.
Jan 21 13:46:09 compute-0 podman[92909]: 2026-01-21 13:46:09.339902453 +0000 UTC m=+0.068082575 container create 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:09 compute-0 systemd[1]: Started libpod-conmon-64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8.scope.
Jan 21 13:46:09 compute-0 podman[92909]: 2026-01-21 13:46:09.316121459 +0000 UTC m=+0.044301601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:09 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d4bd3677c1f9f3c7f897e971a10835501d3f1b9e273569f5b15111f6ed5ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d4bd3677c1f9f3c7f897e971a10835501d3f1b9e273569f5b15111f6ed5ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d4bd3677c1f9f3c7f897e971a10835501d3f1b9e273569f5b15111f6ed5ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85d4bd3677c1f9f3c7f897e971a10835501d3f1b9e273569f5b15111f6ed5ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:09 compute-0 podman[92909]: 2026-01-21 13:46:09.438276988 +0000 UTC m=+0.166457120 container init 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:09 compute-0 podman[92909]: 2026-01-21 13:46:09.451999939 +0000 UTC m=+0.180180051 container start 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:09 compute-0 podman[92909]: 2026-01-21 13:46:09.455584645 +0000 UTC m=+0.183764757 container attach 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:10 compute-0 ceph-mon[75031]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 13:46:10 compute-0 ceph-mon[75031]: Saving service mds.cephfs spec with placement compute-0
Jan 21 13:46:10 compute-0 lvm[93056]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:46:10 compute-0 lvm[93057]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:46:10 compute-0 lvm[93057]: VG ceph_vg1 finished
Jan 21 13:46:10 compute-0 lvm[93056]: VG ceph_vg0 finished
Jan 21 13:46:10 compute-0 lvm[93077]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:46:10 compute-0 lvm[93077]: VG ceph_vg2 finished
Jan 21 13:46:10 compute-0 sudo[93083]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqyxzwaxbogzyrdiuxifbnezdbumdxlq ; /usr/bin/python3'
Jan 21 13:46:10 compute-0 sudo[93083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:10 compute-0 compassionate_feynman[92926]: {}
Jan 21 13:46:10 compute-0 systemd[1]: libpod-64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8.scope: Deactivated successfully.
Jan 21 13:46:10 compute-0 podman[92909]: 2026-01-21 13:46:10.354239518 +0000 UTC m=+1.082419640 container died 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:10 compute-0 systemd[1]: libpod-64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8.scope: Consumed 1.405s CPU time.
Jan 21 13:46:10 compute-0 python3[93085]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 21 13:46:10 compute-0 sudo[93083]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c85d4bd3677c1f9f3c7f897e971a10835501d3f1b9e273569f5b15111f6ed5ce-merged.mount: Deactivated successfully.
Jan 21 13:46:10 compute-0 podman[92909]: 2026-01-21 13:46:10.406778466 +0000 UTC m=+1.134958588 container remove 64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feynman, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:10 compute-0 systemd[1]: libpod-conmon-64a8010429899e592c7f57c86a3c32c92723f35bb95d68a8b768592b4f868ef8.scope: Deactivated successfully.
Jan 21 13:46:10 compute-0 sudo[92803]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:46:10 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:46:10 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:10 compute-0 sudo[93123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:46:10 compute-0 sudo[93123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:10 compute-0 sudo[93123]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:10 compute-0 sudo[93172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:10 compute-0 sudo[93172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:10 compute-0 sudo[93172]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:10 compute-0 sudo[93220]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uorkzuahofceuxgpmabpznzcuzjshmbi ; /usr/bin/python3'
Jan 21 13:46:10 compute-0 sudo[93220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:10 compute-0 sudo[93221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 13:46:10 compute-0 sudo[93221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:10 compute-0 python3[93231]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003170.0899923-36705-144937932674893/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=01672c665cebe1978e709c2eff9d48fb31c7992e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:46:10 compute-0 sudo[93220]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:46:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:46:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:46:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:46:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:46:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:46:11 compute-0 podman[93314]: 2026-01-21 13:46:11.07020195 +0000 UTC m=+0.073425573 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 13:46:11 compute-0 sudo[93357]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzilkstieqzqnnyzzzyfihumpbahmmjy ; /usr/bin/python3'
Jan 21 13:46:11 compute-0 sudo[93357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:11 compute-0 ceph-mon[75031]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:46:11 compute-0 podman[93314]: 2026-01-21 13:46:11.190198696 +0000 UTC m=+0.193422299 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:11 compute-0 python3[93359]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:11 compute-0 podman[93384]: 2026-01-21 13:46:11.31546042 +0000 UTC m=+0.044842553 container create 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 13:46:11 compute-0 systemd[1]: Started libpod-conmon-126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd.scope.
Jan 21 13:46:11 compute-0 podman[93384]: 2026-01-21 13:46:11.292461275 +0000 UTC m=+0.021843438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:11 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bb64b7913aac57991956283d775783ad55d15e9e8f6c688974b7349644d163c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bb64b7913aac57991956283d775783ad55d15e9e8f6c688974b7349644d163c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:11 compute-0 podman[93384]: 2026-01-21 13:46:11.405844362 +0000 UTC m=+0.135226525 container init 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:11 compute-0 podman[93384]: 2026-01-21 13:46:11.413334473 +0000 UTC m=+0.142716606 container start 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 13:46:11 compute-0 podman[93384]: 2026-01-21 13:46:11.416614872 +0000 UTC m=+0.145997005 container attach 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 13:46:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:11 compute-0 sudo[93221]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:46:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:46:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 21 13:46:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1487634279' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 21 13:46:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:46:11 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1487634279' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 21 13:46:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:46:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:46:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:46:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:46:11 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:46:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:46:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:46:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:46:11 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:11 compute-0 systemd[1]: libpod-126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd.scope: Deactivated successfully.
Jan 21 13:46:11 compute-0 podman[93384]: 2026-01-21 13:46:11.987740018 +0000 UTC m=+0.717122161 container died 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:12 compute-0 sudo[93527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:12 compute-0 sudo[93527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:12 compute-0 sudo[93527]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:12 compute-0 sudo[93560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:46:12 compute-0 sudo[93560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bb64b7913aac57991956283d775783ad55d15e9e8f6c688974b7349644d163c-merged.mount: Deactivated successfully.
Jan 21 13:46:12 compute-0 podman[93384]: 2026-01-21 13:46:12.396981126 +0000 UTC m=+1.126363259 container remove 126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd (image=quay.io/ceph/ceph:v20, name=distracted_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:12 compute-0 systemd[1]: libpod-conmon-126eb2e9c975817a7713b23af7c2b2b83485a42d520f6aa927eb1c18d408ccdd.scope: Deactivated successfully.
Jan 21 13:46:12 compute-0 sudo[93357]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:12 compute-0 podman[93599]: 2026-01-21 13:46:12.485215096 +0000 UTC m=+0.058918713 container create d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:12 compute-0 systemd[1]: Started libpod-conmon-d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41.scope.
Jan 21 13:46:12 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:12 compute-0 podman[93599]: 2026-01-21 13:46:12.464303571 +0000 UTC m=+0.038007238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:12 compute-0 podman[93599]: 2026-01-21 13:46:12.861671373 +0000 UTC m=+0.435375030 container init d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 13:46:12 compute-0 podman[93599]: 2026-01-21 13:46:12.872869023 +0000 UTC m=+0.446572630 container start d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 13:46:12 compute-0 podman[93599]: 2026-01-21 13:46:12.876612214 +0000 UTC m=+0.450315821 container attach d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:12 compute-0 quizzical_mcnulty[93615]: 167 167
Jan 21 13:46:12 compute-0 systemd[1]: libpod-d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41.scope: Deactivated successfully.
Jan 21 13:46:12 compute-0 podman[93599]: 2026-01-21 13:46:12.87809957 +0000 UTC m=+0.451803177 container died d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Jan 21 13:46:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-cff3ee6ba9fc1202db121b3cad2274f64e0dd4fdeea512778a37277f33ee4d79-merged.mount: Deactivated successfully.
Jan 21 13:46:12 compute-0 podman[93599]: 2026-01-21 13:46:12.918439734 +0000 UTC m=+0.492143351 container remove d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 13:46:12 compute-0 ceph-mon[75031]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:12 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1487634279' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 21 13:46:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:12 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1487634279' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 21 13:46:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:46:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:46:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:46:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:12 compute-0 systemd[1]: libpod-conmon-d9a530f39758d71dfed07a076070ae393335e3c0c9ab87a5cc80fe60b5b56d41.scope: Deactivated successfully.
Jan 21 13:46:13 compute-0 sudo[93676]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duomrslupovdkimgplcahbyekaxzkljw ; /usr/bin/python3'
Jan 21 13:46:13 compute-0 sudo[93676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:13 compute-0 podman[93639]: 2026-01-21 13:46:13.054830765 +0000 UTC m=+0.021380006 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:13 compute-0 podman[93639]: 2026-01-21 13:46:13.173707385 +0000 UTC m=+0.140256606 container create 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 13:46:13 compute-0 systemd[1]: Started libpod-conmon-3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4.scope.
Jan 21 13:46:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:13 compute-0 python3[93678]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:13 compute-0 podman[93639]: 2026-01-21 13:46:13.28328344 +0000 UTC m=+0.249832741 container init 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:13 compute-0 podman[93639]: 2026-01-21 13:46:13.292154515 +0000 UTC m=+0.258703736 container start 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 13:46:13 compute-0 podman[93639]: 2026-01-21 13:46:13.295538426 +0000 UTC m=+0.262087687 container attach 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:13 compute-0 podman[93687]: 2026-01-21 13:46:13.320353245 +0000 UTC m=+0.043444570 container create 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:13 compute-0 systemd[1]: Started libpod-conmon-4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a.scope.
Jan 21 13:46:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def8c6eadf554d69743d2d159c0eb5b191a552cb3e985fcb6fab4ef96037ab94/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def8c6eadf554d69743d2d159c0eb5b191a552cb3e985fcb6fab4ef96037ab94/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:13 compute-0 podman[93687]: 2026-01-21 13:46:13.375787343 +0000 UTC m=+0.098878668 container init 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:13 compute-0 podman[93687]: 2026-01-21 13:46:13.383816767 +0000 UTC m=+0.106908092 container start 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:13 compute-0 podman[93687]: 2026-01-21 13:46:13.386931472 +0000 UTC m=+0.110022797 container attach 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 13:46:13 compute-0 podman[93687]: 2026-01-21 13:46:13.304056531 +0000 UTC m=+0.027147886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:13 compute-0 amazing_noyce[93683]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:46:13 compute-0 amazing_noyce[93683]: --> All data devices are unavailable
Jan 21 13:46:13 compute-0 systemd[1]: libpod-3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4.scope: Deactivated successfully.
Jan 21 13:46:13 compute-0 podman[93743]: 2026-01-21 13:46:13.796396056 +0000 UTC m=+0.024498383 container died 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 13:46:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-312138390c70cc5785276e9bc84df4980f58fc93fece52123be6cd590ea376db-merged.mount: Deactivated successfully.
Jan 21 13:46:13 compute-0 podman[93743]: 2026-01-21 13:46:13.838650216 +0000 UTC m=+0.066752533 container remove 3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_noyce, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 13:46:13 compute-0 systemd[1]: libpod-conmon-3d96b111f444e05090bb71191ae0b9e342f15b81c4444d64b8fafcbbe3053eb4.scope: Deactivated successfully.
Jan 21 13:46:13 compute-0 sudo[93560]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 13:46:13 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3263255916' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 13:46:13 compute-0 hardcore_clarke[93705]: 
Jan 21 13:46:13 compute-0 hardcore_clarke[93705]: {"fsid":"2f0e9cad-f0a3-5869-9cc3-8d84d071866a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":112,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":33,"num_osds":3,"num_up_osds":3,"osd_up_since":1769003143,"num_in_osds":3,"osd_in_since":1769003119,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83943424,"bytes_avail":64327983104,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-21T13:46:07:955917+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-21T13:45:41.522372+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 21 13:46:13 compute-0 systemd[1]: libpod-4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a.scope: Deactivated successfully.
Jan 21 13:46:13 compute-0 podman[93687]: 2026-01-21 13:46:13.923767451 +0000 UTC m=+0.646858816 container died 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 13:46:13 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3263255916' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 13:46:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-def8c6eadf554d69743d2d159c0eb5b191a552cb3e985fcb6fab4ef96037ab94-merged.mount: Deactivated successfully.
Jan 21 13:46:13 compute-0 sudo[93759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:13 compute-0 sudo[93759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:13 compute-0 sudo[93759]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:13 compute-0 podman[93687]: 2026-01-21 13:46:13.972841805 +0000 UTC m=+0.695933130 container remove 4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a (image=quay.io/ceph/ceph:v20, name=hardcore_clarke, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:13 compute-0 systemd[1]: libpod-conmon-4446fc971c23402db12a1196f9556976160517019ef0e4948dbab5421e3dd31a.scope: Deactivated successfully.
Jan 21 13:46:13 compute-0 sudo[93676]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:14 compute-0 sudo[93795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:46:14 compute-0 sudo[93795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:14 compute-0 sudo[93843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deyfvofbtyessscqijlatzlkswckbasa ; /usr/bin/python3'
Jan 21 13:46:14 compute-0 sudo[93843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:14 compute-0 python3[93845]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:14 compute-0 podman[93858]: 2026-01-21 13:46:14.30614258 +0000 UTC m=+0.038761977 container create 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:14 compute-0 podman[93860]: 2026-01-21 13:46:14.322834323 +0000 UTC m=+0.044685459 container create e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 13:46:14 compute-0 systemd[1]: Started libpod-conmon-7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f.scope.
Jan 21 13:46:14 compute-0 systemd[1]: Started libpod-conmon-e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3.scope.
Jan 21 13:46:14 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba4c65322ce91e4fbb2a816812d74c672dec6deb04b0f3bb84ad8bb74155452/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba4c65322ce91e4fbb2a816812d74c672dec6deb04b0f3bb84ad8bb74155452/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:14 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:14 compute-0 podman[93858]: 2026-01-21 13:46:14.381772916 +0000 UTC m=+0.114392333 container init 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 13:46:14 compute-0 podman[93858]: 2026-01-21 13:46:14.288344191 +0000 UTC m=+0.020963608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:14 compute-0 podman[93858]: 2026-01-21 13:46:14.388521789 +0000 UTC m=+0.121141186 container start 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 13:46:14 compute-0 podman[93860]: 2026-01-21 13:46:14.391301556 +0000 UTC m=+0.113152702 container init e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:14 compute-0 podman[93858]: 2026-01-21 13:46:14.39519832 +0000 UTC m=+0.127817717 container attach 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 13:46:14 compute-0 podman[93860]: 2026-01-21 13:46:14.396310907 +0000 UTC m=+0.118162043 container start e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 13:46:14 compute-0 podman[93860]: 2026-01-21 13:46:14.300930954 +0000 UTC m=+0.022782100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:14 compute-0 systemd[1]: libpod-e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3.scope: Deactivated successfully.
Jan 21 13:46:14 compute-0 podman[93860]: 2026-01-21 13:46:14.400139459 +0000 UTC m=+0.121990615 container attach e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 13:46:14 compute-0 confident_greider[93892]: 167 167
Jan 21 13:46:14 compute-0 conmon[93892]: conmon e78d0ef3776de640a1fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3.scope/container/memory.events
Jan 21 13:46:14 compute-0 podman[93860]: 2026-01-21 13:46:14.401127533 +0000 UTC m=+0.122978669 container died e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 13:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc06cd2250586836eadeac001f62cf88c17182fc962171ec81fb3d41b499b092-merged.mount: Deactivated successfully.
Jan 21 13:46:14 compute-0 podman[93860]: 2026-01-21 13:46:14.433682279 +0000 UTC m=+0.155533405 container remove e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_greider, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:14 compute-0 systemd[1]: libpod-conmon-e78d0ef3776de640a1fc6b6d8065448df97c6b89617fc18907a4732ca36285e3.scope: Deactivated successfully.
Jan 21 13:46:14 compute-0 podman[93935]: 2026-01-21 13:46:14.600207309 +0000 UTC m=+0.045842908 container create 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 13:46:14 compute-0 systemd[1]: Started libpod-conmon-1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d.scope.
Jan 21 13:46:14 compute-0 podman[93935]: 2026-01-21 13:46:14.581156129 +0000 UTC m=+0.026791758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:14 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6824ec5ecbbafef523fab5ee55a060c3f0c721a419ae9d565d2269a597ba2fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6824ec5ecbbafef523fab5ee55a060c3f0c721a419ae9d565d2269a597ba2fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6824ec5ecbbafef523fab5ee55a060c3f0c721a419ae9d565d2269a597ba2fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6824ec5ecbbafef523fab5ee55a060c3f0c721a419ae9d565d2269a597ba2fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:14 compute-0 podman[93935]: 2026-01-21 13:46:14.718720949 +0000 UTC m=+0.164356558 container init 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:14 compute-0 podman[93935]: 2026-01-21 13:46:14.730507934 +0000 UTC m=+0.176143523 container start 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 13:46:14 compute-0 podman[93935]: 2026-01-21 13:46:14.734604353 +0000 UTC m=+0.180239992 container attach 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 21 13:46:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 13:46:14 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2177793343' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 13:46:14 compute-0 hopeful_joliot[93890]: 
Jan 21 13:46:14 compute-0 hopeful_joliot[93890]: {"epoch":1,"fsid":"2f0e9cad-f0a3-5869-9cc3-8d84d071866a","modified":"2026-01-21T13:44:16.665097Z","created":"2026-01-21T13:44:16.665097Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 21 13:46:14 compute-0 hopeful_joliot[93890]: dumped monmap epoch 1
Jan 21 13:46:14 compute-0 systemd[1]: libpod-7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f.scope: Deactivated successfully.
Jan 21 13:46:14 compute-0 podman[93858]: 2026-01-21 13:46:14.894012051 +0000 UTC m=+0.626631458 container died 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ba4c65322ce91e4fbb2a816812d74c672dec6deb04b0f3bb84ad8bb74155452-merged.mount: Deactivated successfully.
Jan 21 13:46:14 compute-0 podman[93858]: 2026-01-21 13:46:14.935302687 +0000 UTC m=+0.667922084 container remove 7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f (image=quay.io/ceph/ceph:v20, name=hopeful_joliot, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 13:46:14 compute-0 ceph-mon[75031]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:14 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2177793343' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 13:46:14 compute-0 systemd[1]: libpod-conmon-7741ada34cfe7b563d4ef4b4b443c90a58641ca213c3e9ed88586d5724eef97f.scope: Deactivated successfully.
Jan 21 13:46:14 compute-0 sudo[93843]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:15 compute-0 admiring_williamson[93952]: {
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:     "0": [
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:         {
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "devices": [
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "/dev/loop3"
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             ],
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_name": "ceph_lv0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_size": "21470642176",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "name": "ceph_lv0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "tags": {
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.crush_device_class": "",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.encrypted": "0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.osd_id": "0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.type": "block",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.vdo": "0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.with_tpm": "0"
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             },
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "type": "block",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "vg_name": "ceph_vg0"
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:         }
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:     ],
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:     "1": [
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:         {
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "devices": [
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "/dev/loop4"
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             ],
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_name": "ceph_lv1",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_size": "21470642176",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "name": "ceph_lv1",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "tags": {
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.crush_device_class": "",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.encrypted": "0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.osd_id": "1",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.type": "block",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.vdo": "0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.with_tpm": "0"
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             },
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "type": "block",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "vg_name": "ceph_vg1"
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:         }
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:     ],
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:     "2": [
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:         {
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "devices": [
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "/dev/loop5"
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             ],
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_name": "ceph_lv2",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_size": "21470642176",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "name": "ceph_lv2",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "tags": {
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.crush_device_class": "",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.encrypted": "0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.osd_id": "2",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.type": "block",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.vdo": "0",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:                 "ceph.with_tpm": "0"
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             },
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "type": "block",
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:             "vg_name": "ceph_vg2"
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:         }
Jan 21 13:46:15 compute-0 admiring_williamson[93952]:     ]
Jan 21 13:46:15 compute-0 admiring_williamson[93952]: }
Jan 21 13:46:15 compute-0 systemd[1]: libpod-1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d.scope: Deactivated successfully.
Jan 21 13:46:15 compute-0 podman[93976]: 2026-01-21 13:46:15.157209684 +0000 UTC m=+0.045553771 container died 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6824ec5ecbbafef523fab5ee55a060c3f0c721a419ae9d565d2269a597ba2fb-merged.mount: Deactivated successfully.
Jan 21 13:46:15 compute-0 podman[93976]: 2026-01-21 13:46:15.214328502 +0000 UTC m=+0.102672519 container remove 1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_williamson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 13:46:15 compute-0 systemd[1]: libpod-conmon-1a0b562dbde21e733fbaa287193a90db34d39bcd374521c25fb0c02c248e859d.scope: Deactivated successfully.
Jan 21 13:46:15 compute-0 sudo[93795]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:15 compute-0 sudo[93991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:15 compute-0 sudo[93991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:15 compute-0 sudo[93991]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:15 compute-0 sudo[94016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:46:15 compute-0 sudo[94016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:15 compute-0 sudo[94062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uefsemwsvqcuzdxafrdetasnbvovsxoi ; /usr/bin/python3'
Jan 21 13:46:15 compute-0 sudo[94062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:15 compute-0 python3[94066]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:15 compute-0 podman[94067]: 2026-01-21 13:46:15.628053359 +0000 UTC m=+0.055819338 container create feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:15 compute-0 systemd[1]: Started libpod-conmon-feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3.scope.
Jan 21 13:46:15 compute-0 podman[94067]: 2026-01-21 13:46:15.609528543 +0000 UTC m=+0.037294532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:15 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657857baff19e45638555ec52ccb121b04459bb9d853262e165174a2c5ada54f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657857baff19e45638555ec52ccb121b04459bb9d853262e165174a2c5ada54f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:15 compute-0 podman[94095]: 2026-01-21 13:46:15.736659641 +0000 UTC m=+0.059758243 container create 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 13:46:15 compute-0 podman[94067]: 2026-01-21 13:46:15.741972299 +0000 UTC m=+0.169738298 container init feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:15 compute-0 podman[94067]: 2026-01-21 13:46:15.748006605 +0000 UTC m=+0.175772604 container start feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 13:46:15 compute-0 podman[94067]: 2026-01-21 13:46:15.754756677 +0000 UTC m=+0.182522686 container attach feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:15 compute-0 systemd[1]: Started libpod-conmon-5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af.scope.
Jan 21 13:46:15 compute-0 podman[94095]: 2026-01-21 13:46:15.707723973 +0000 UTC m=+0.030822635 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:15 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:15 compute-0 podman[94095]: 2026-01-21 13:46:15.831431508 +0000 UTC m=+0.154530170 container init 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:15 compute-0 podman[94095]: 2026-01-21 13:46:15.837990797 +0000 UTC m=+0.161089399 container start 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 21 13:46:15 compute-0 podman[94095]: 2026-01-21 13:46:15.842394763 +0000 UTC m=+0.165493335 container attach 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 13:46:15 compute-0 sharp_borg[94114]: 167 167
Jan 21 13:46:15 compute-0 systemd[1]: libpod-5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af.scope: Deactivated successfully.
Jan 21 13:46:15 compute-0 podman[94095]: 2026-01-21 13:46:15.843230554 +0000 UTC m=+0.166329126 container died 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-faf1ff166ea645835e67635df657787ef08fe5945854fc00dd993a9ede40a20d-merged.mount: Deactivated successfully.
Jan 21 13:46:15 compute-0 podman[94095]: 2026-01-21 13:46:15.89653564 +0000 UTC m=+0.219634242 container remove 5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:15 compute-0 systemd[1]: libpod-conmon-5ced89653c736998dc59b4b483b58a4ee7fba1fca477bad92fe9122295f032af.scope: Deactivated successfully.
Jan 21 13:46:16 compute-0 podman[94155]: 2026-01-21 13:46:16.075941501 +0000 UTC m=+0.038578873 container create 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:16 compute-0 systemd[1]: Started libpod-conmon-8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556.scope.
Jan 21 13:46:16 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb296f86d45de336e04a11bea6e0421906130a37ee2de7c6debe126f9e0ac16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb296f86d45de336e04a11bea6e0421906130a37ee2de7c6debe126f9e0ac16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb296f86d45de336e04a11bea6e0421906130a37ee2de7c6debe126f9e0ac16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb296f86d45de336e04a11bea6e0421906130a37ee2de7c6debe126f9e0ac16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:46:16 compute-0 podman[94155]: 2026-01-21 13:46:16.058039359 +0000 UTC m=+0.020676721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:16 compute-0 podman[94155]: 2026-01-21 13:46:16.157718035 +0000 UTC m=+0.120355427 container init 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:16 compute-0 podman[94155]: 2026-01-21 13:46:16.163982056 +0000 UTC m=+0.126619388 container start 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:16 compute-0 podman[94155]: 2026-01-21 13:46:16.167350347 +0000 UTC m=+0.129987739 container attach 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 21 13:46:16 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2176148221' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 21 13:46:16 compute-0 gracious_grothendieck[94102]: [client.openstack]
Jan 21 13:46:16 compute-0 gracious_grothendieck[94102]:         key = AQAK2HBpAAAAABAAhSWZ4orU8dfgZu1d3brE9g==
Jan 21 13:46:16 compute-0 gracious_grothendieck[94102]:         caps mgr = "allow *"
Jan 21 13:46:16 compute-0 gracious_grothendieck[94102]:         caps mon = "profile rbd"
Jan 21 13:46:16 compute-0 gracious_grothendieck[94102]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 21 13:46:16 compute-0 systemd[1]: libpod-feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3.scope: Deactivated successfully.
Jan 21 13:46:16 compute-0 podman[94067]: 2026-01-21 13:46:16.320083564 +0000 UTC m=+0.747849583 container died feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-657857baff19e45638555ec52ccb121b04459bb9d853262e165174a2c5ada54f-merged.mount: Deactivated successfully.
Jan 21 13:46:16 compute-0 podman[94067]: 2026-01-21 13:46:16.406313366 +0000 UTC m=+0.834079355 container remove feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3 (image=quay.io/ceph/ceph:v20, name=gracious_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:16 compute-0 systemd[1]: libpod-conmon-feb0e1aeb3142026697d81fe15df58dab37e3cd88b96dd091df53b1fc5e12ee3.scope: Deactivated successfully.
Jan 21 13:46:16 compute-0 sudo[94062]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:16 compute-0 lvm[94260]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:46:16 compute-0 lvm[94260]: VG ceph_vg0 finished
Jan 21 13:46:16 compute-0 lvm[94263]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:46:16 compute-0 lvm[94263]: VG ceph_vg1 finished
Jan 21 13:46:16 compute-0 lvm[94265]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:46:16 compute-0 lvm[94265]: VG ceph_vg2 finished
Jan 21 13:46:16 compute-0 lvm[94267]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:46:16 compute-0 lvm[94267]: VG ceph_vg1 finished
Jan 21 13:46:16 compute-0 lvm[94266]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:46:16 compute-0 lvm[94266]: VG ceph_vg0 finished
Jan 21 13:46:16 compute-0 fervent_goldstine[94172]: {}
Jan 21 13:46:16 compute-0 ceph-mon[75031]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:16 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2176148221' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 21 13:46:16 compute-0 systemd[1]: libpod-8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556.scope: Deactivated successfully.
Jan 21 13:46:16 compute-0 podman[94155]: 2026-01-21 13:46:16.962168603 +0000 UTC m=+0.924805935 container died 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:16 compute-0 systemd[1]: libpod-8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556.scope: Consumed 1.259s CPU time.
Jan 21 13:46:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebb296f86d45de336e04a11bea6e0421906130a37ee2de7c6debe126f9e0ac16-merged.mount: Deactivated successfully.
Jan 21 13:46:17 compute-0 podman[94155]: 2026-01-21 13:46:17.004777021 +0000 UTC m=+0.967414363 container remove 8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Jan 21 13:46:17 compute-0 systemd[1]: libpod-conmon-8f17aa468ce24f41459d53cb56eaa9c57ef4cd943fe6c42cc922d56fb526f556.scope: Deactivated successfully.
Jan 21 13:46:17 compute-0 sudo[94016]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:46:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:46:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:17 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev 810b50ce-cedb-4e21-ae94-a106b4334385 (Updating rgw.rgw deployment (+1 -> 1))
Jan 21 13:46:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xeytxr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 21 13:46:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xeytxr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 21 13:46:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xeytxr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 13:46:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 21 13:46:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:46:17 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:17 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.xeytxr on compute-0
Jan 21 13:46:17 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.xeytxr on compute-0
Jan 21 13:46:17 compute-0 sudo[94281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:17 compute-0 sudo[94281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:17 compute-0 sudo[94281]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:17 compute-0 sudo[94306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:46:17 compute-0 sudo[94306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:17 compute-0 podman[94394]: 2026-01-21 13:46:17.592001697 +0000 UTC m=+0.042579140 container create 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:17 compute-0 systemd[1]: Started libpod-conmon-3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed.scope.
Jan 21 13:46:17 compute-0 podman[94394]: 2026-01-21 13:46:17.572027124 +0000 UTC m=+0.022604607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:17 compute-0 podman[94394]: 2026-01-21 13:46:17.689133641 +0000 UTC m=+0.139711124 container init 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:17 compute-0 podman[94394]: 2026-01-21 13:46:17.698679022 +0000 UTC m=+0.149256465 container start 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:17 compute-0 podman[94394]: 2026-01-21 13:46:17.702882533 +0000 UTC m=+0.153460026 container attach 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 13:46:17 compute-0 upbeat_black[94461]: 167 167
Jan 21 13:46:17 compute-0 systemd[1]: libpod-3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed.scope: Deactivated successfully.
Jan 21 13:46:17 compute-0 conmon[94461]: conmon 3cd82e5d48a51cadd61a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed.scope/container/memory.events
Jan 21 13:46:17 compute-0 podman[94394]: 2026-01-21 13:46:17.707105795 +0000 UTC m=+0.157683248 container died 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 21 13:46:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-512e993b3557da152ba8bc8a693b59551510d175bf7953e6a3940d13c887afa5-merged.mount: Deactivated successfully.
Jan 21 13:46:17 compute-0 podman[94394]: 2026-01-21 13:46:17.759850138 +0000 UTC m=+0.210427581 container remove 3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:17 compute-0 systemd[1]: libpod-conmon-3cd82e5d48a51cadd61a131e3076ad4421a7fbc9ac5ad18c351540d153a6ffed.scope: Deactivated successfully.
Jan 21 13:46:17 compute-0 systemd[1]: Reloading.
Jan 21 13:46:17 compute-0 systemd-rc-local-generator[94549]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:46:17 compute-0 systemd-sysv-generator[94554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:46:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xeytxr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 21 13:46:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xeytxr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 21 13:46:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:18 compute-0 ceph-mon[75031]: Deploying daemon rgw.rgw.compute-0.xeytxr on compute-0
Jan 21 13:46:18 compute-0 systemd[1]: Reloading.
Jan 21 13:46:18 compute-0 sudo[94588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odpjtvjudozkfxifrlrzfutnjayqagel ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769003177.5259786-36777-205437609874121/async_wrapper.py j61921864083 30 /home/zuul/.ansible/tmp/ansible-tmp-1769003177.5259786-36777-205437609874121/AnsiballZ_command.py _'
Jan 21 13:46:18 compute-0 systemd-rc-local-generator[94613]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:46:18 compute-0 systemd-sysv-generator[94619]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:46:18 compute-0 sudo[94588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:18 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.xeytxr for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:46:18 compute-0 ansible-async_wrapper.py[94627]: Invoked with j61921864083 30 /home/zuul/.ansible/tmp/ansible-tmp-1769003177.5259786-36777-205437609874121/AnsiballZ_command.py _
Jan 21 13:46:18 compute-0 ansible-async_wrapper.py[94657]: Starting module and watcher
Jan 21 13:46:18 compute-0 ansible-async_wrapper.py[94657]: Start watching 94658 (30)
Jan 21 13:46:18 compute-0 ansible-async_wrapper.py[94658]: Start module (94658)
Jan 21 13:46:18 compute-0 ansible-async_wrapper.py[94627]: Return async_wrapper task started.
Jan 21 13:46:18 compute-0 sudo[94588]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:18 compute-0 podman[94682]: 2026-01-21 13:46:18.660450048 +0000 UTC m=+0.058735809 container create d95768cf4dac1ef056ea8ada597c056fa231007c2f44315123caa31cea263ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-rgw-rgw-compute-0-xeytxr, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:18 compute-0 python3[94664]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e690b92cd67a3a9c590da30dab4048f43cea21933c6e5b0d2174a9f4e9e70ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e690b92cd67a3a9c590da30dab4048f43cea21933c6e5b0d2174a9f4e9e70ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e690b92cd67a3a9c590da30dab4048f43cea21933c6e5b0d2174a9f4e9e70ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e690b92cd67a3a9c590da30dab4048f43cea21933c6e5b0d2174a9f4e9e70ec/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.xeytxr supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:18 compute-0 podman[94682]: 2026-01-21 13:46:18.721526131 +0000 UTC m=+0.119811932 container init d95768cf4dac1ef056ea8ada597c056fa231007c2f44315123caa31cea263ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-rgw-rgw-compute-0-xeytxr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 13:46:18 compute-0 podman[94682]: 2026-01-21 13:46:18.633614079 +0000 UTC m=+0.031899840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:18 compute-0 podman[94694]: 2026-01-21 13:46:18.726674756 +0000 UTC m=+0.046801931 container create 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 13:46:18 compute-0 podman[94682]: 2026-01-21 13:46:18.731362929 +0000 UTC m=+0.129648690 container start d95768cf4dac1ef056ea8ada597c056fa231007c2f44315123caa31cea263ec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-rgw-rgw-compute-0-xeytxr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 13:46:18 compute-0 bash[94682]: d95768cf4dac1ef056ea8ada597c056fa231007c2f44315123caa31cea263ec8
Jan 21 13:46:18 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.xeytxr for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
Jan 21 13:46:18 compute-0 systemd[1]: Started libpod-conmon-40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab.scope.
Jan 21 13:46:18 compute-0 radosgw[94709]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 21 13:46:18 compute-0 radosgw[94709]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 21 13:46:18 compute-0 radosgw[94709]: framework: beast
Jan 21 13:46:18 compute-0 radosgw[94709]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 21 13:46:18 compute-0 radosgw[94709]: init_numa not setting numa affinity
Jan 21 13:46:18 compute-0 sudo[94306]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:46:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:18 compute-0 podman[94694]: 2026-01-21 13:46:18.70696771 +0000 UTC m=+0.027094905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:46:18 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba673742d9ae554e2a26cb840ebc314f03b924ab136c66585b4c7234c066537f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba673742d9ae554e2a26cb840ebc314f03b924ab136c66585b4c7234c066537f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 13:46:18 compute-0 podman[94694]: 2026-01-21 13:46:18.824061566 +0000 UTC m=+0.144188781 container init 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:18 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev 810b50ce-cedb-4e21-ae94-a106b4334385 (Updating rgw.rgw deployment (+1 -> 1))
Jan 21 13:46:18 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event 810b50ce-cedb-4e21-ae94-a106b4334385 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Jan 21 13:46:18 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 21 13:46:18 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 21 13:46:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 13:46:18 compute-0 podman[94694]: 2026-01-21 13:46:18.830396919 +0000 UTC m=+0.150524104 container start 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 13:46:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 21 13:46:18 compute-0 podman[94694]: 2026-01-21 13:46:18.834246553 +0000 UTC m=+0.154373728 container attach 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 13:46:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:18 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev 3d7f3518-2244-4e51-b382-2c2a8c5fe4f4 (Updating mds.cephfs deployment (+1 -> 1))
Jan 21 13:46:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ddixwa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 21 13:46:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ddixwa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 21 13:46:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ddixwa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 13:46:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:46:18 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
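The two mon_command payloads above are what the cephadm mgr module issues while preparing the MDS deployment. Run by hand, with the exact entity and caps from the audit line, they would be:

    # Create (or fetch) the MDS key with the caps shown above.
    $ ceph auth get-or-create mds.cephfs.compute-0.ddixwa \
          mon 'profile mds' \
          osd 'allow rw tag cephfs *=*' \
          mds 'allow'

    # Render the minimal ceph.conf that cephadm ships into the container.
    $ ceph config generate-minimal-conf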
Jan 21 13:46:18 compute-0 ceph-mgr[75322]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ddixwa on compute-0
Jan 21 13:46:18 compute-0 ceph-mgr[75322]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ddixwa on compute-0
Jan 21 13:46:18 compute-0 sudo[94747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:18 compute-0 sudo[94747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:18 compute-0 sudo[94747]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:18 compute-0 sudo[94774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
Jan 21 13:46:18 compute-0 sudo[94774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
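This sudo pair is the cephadm mgr module acting through its SSH connection as ceph-admin: it locates python3, then runs the hash-suffixed copy of cephadm under /var/lib/ceph/<fsid>/ with the internal _orch deploy subcommand, passing the daemon spec on stdin. The packaged front-end gives the same per-host inventory view (a sketch; output shape varies by release):

    # List the daemons cephadm knows about on this host.
    $ sudo cephadm ls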
Jan 21 13:46:19 compute-0 ceph-mon[75031]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ddixwa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 21 13:46:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ddixwa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 21 13:46:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:46:19 compute-0 jovial_cartwright[94717]: 
Jan 21 13:46:19 compute-0 jovial_cartwright[94717]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
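The container named jovial_cartwright was a one-shot probe running ceph orch status --format json (compare the ansible-logged podman command further below); the JSON on the line above is its stdout. Without the container wrapper, and filtered with jq for readability:

    $ ceph orch status --format json | jq -r '.available, .backend, .workers'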
Jan 21 13:46:19 compute-0 systemd[1]: libpod-40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab.scope: Deactivated successfully.
Jan 21 13:46:19 compute-0 podman[94855]: 2026-01-21 13:46:19.337952081 +0000 UTC m=+0.039606237 container create 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 21 13:46:19 compute-0 podman[94868]: 2026-01-21 13:46:19.360984737 +0000 UTC m=+0.028322854 container died 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 13:46:19 compute-0 systemd[1]: Started libpod-conmon-925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63.scope.
Jan 21 13:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba673742d9ae554e2a26cb840ebc314f03b924ab136c66585b4c7234c066537f-merged.mount: Deactivated successfully.
Jan 21 13:46:19 compute-0 podman[94868]: 2026-01-21 13:46:19.411504226 +0000 UTC m=+0.078842343 container remove 40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab (image=quay.io/ceph/ceph:v20, name=jovial_cartwright, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 13:46:19 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:19 compute-0 systemd[1]: libpod-conmon-40db4d7dbf6b553e43bf3fe628fd79c1d4d70bfd5e44c0a6e73d44a87e7ab3ab.scope: Deactivated successfully.
Jan 21 13:46:19 compute-0 podman[94855]: 2026-01-21 13:46:19.320409748 +0000 UTC m=+0.022063924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:19 compute-0 podman[94855]: 2026-01-21 13:46:19.427988495 +0000 UTC m=+0.129642671 container init 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 13:46:19 compute-0 ansible-async_wrapper.py[94658]: Module complete (94658)
Jan 21 13:46:19 compute-0 podman[94855]: 2026-01-21 13:46:19.434637885 +0000 UTC m=+0.136292041 container start 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 13:46:19 compute-0 focused_mahavira[94889]: 167 167
Jan 21 13:46:19 compute-0 systemd[1]: libpod-925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63.scope: Deactivated successfully.
Jan 21 13:46:19 compute-0 podman[94855]: 2026-01-21 13:46:19.438236462 +0000 UTC m=+0.139890618 container attach 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:19 compute-0 podman[94855]: 2026-01-21 13:46:19.438780395 +0000 UTC m=+0.140434551 container died 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e56b84b837c57d8006887fcf3538b0402e126d950dfc74e2a0aa5a3cf5008b50-merged.mount: Deactivated successfully.
Jan 21 13:46:19 compute-0 podman[94855]: 2026-01-21 13:46:19.476855414 +0000 UTC m=+0.178509570 container remove 925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:19 compute-0 systemd[1]: libpod-conmon-925f1e0a5c6b5d731e540223d3b70eba8d2dfe60ef64fc1aed291b5ff318ad63.scope: Deactivated successfully.
Jan 21 13:46:19 compute-0 systemd[1]: Reloading.
Jan 21 13:46:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:19 compute-0 systemd-rc-local-generator[94937]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:46:19 compute-0 systemd-sysv-generator[94941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:46:19 compute-0 sudo[94988]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwkjhboeqknkdgcqtjipziieasnskdsu ; /usr/bin/python3'
Jan 21 13:46:19 compute-0 sudo[94988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 21 13:46:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 21 13:46:19 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 21 13:46:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 21 13:46:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/854169589' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
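On first startup radosgw creates its root pool and tags it for the rgw application, which is the command dispatched above. The equivalent CLI, plus reading the tag back:

    $ ceph osd pool application enable .rgw.root rgw
    $ ceph osd pool application get .rgw.root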
Jan 21 13:46:19 compute-0 systemd[1]: Reloading.
Jan 21 13:46:19 compute-0 systemd-rc-local-generator[95024]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:46:19 compute-0 systemd-sysv-generator[95028]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
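The two "Reloading." passes bracket cephadm writing the new daemon unit files; the rc.local and SysV network generator messages are pre-existing warnings on this host, repeated on every reload and unrelated to Ceph. The units cephadm manages for this cluster can be listed by fsid:

    $ systemctl list-units 'ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@*'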
Jan 21 13:46:19 compute-0 python3[94992]: ansible-ansible.legacy.async_status Invoked with jid=j61921864083.94627 mode=status _async_dir=/root/.ansible_async
Jan 21 13:46:19 compute-0 sudo[94988]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:20 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 34 pg[8.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:20 compute-0 sudo[95077]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqoegworcjdcymzfqdxoallyjlsixant ; /usr/bin/python3'
Jan 21 13:46:20 compute-0 sudo[95077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:20 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.ddixwa for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a...
Jan 21 13:46:20 compute-0 python3[95081]: ansible-ansible.legacy.async_status Invoked with jid=j61921864083.94627 mode=cleanup _async_dir=/root/.ansible_async
Jan 21 13:46:20 compute-0 sudo[95077]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:20 compute-0 podman[95128]: 2026-01-21 13:46:20.333142103 +0000 UTC m=+0.020842664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:20 compute-0 sudo[95164]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbawawxtqhhlnknjczsqqyfeagztbsoa ; /usr/bin/python3'
Jan 21 13:46:20 compute-0 sudo[95164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:20 compute-0 python3[95166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
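The _raw_params value above is easier to read broken across lines; this is the same command, unchanged:

    $ podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v20 \
        --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a \
        -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring \
        orch status --format json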
Jan 21 13:46:20 compute-0 ceph-mgr[75322]: [progress INFO root] Writing back 4 completed events
Jan 21 13:46:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 21 13:46:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 13:46:21 compute-0 ceph-mon[75031]: Saving service rgw.rgw spec with placement compute-0
Jan 21 13:46:21 compute-0 ceph-mon[75031]: Deploying daemon mds.cephfs.compute-0.ddixwa on compute-0
Jan 21 13:46:21 compute-0 ceph-mon[75031]: from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:46:21 compute-0 ceph-mon[75031]: osdmap e34: 3 total, 3 up, 3 in
Jan 21 13:46:21 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/854169589' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 21 13:46:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v76: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:21 compute-0 podman[95128]: 2026-01-21 13:46:21.533783125 +0000 UTC m=+1.221483676 container create 380cea61fdd3e7c3770a41073ef15e8e1016252df6e767dd7931a2e6c1d30007 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mds-cephfs-compute-0-ddixwa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/854169589' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 21 13:46:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 21 13:46:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 21 13:46:21 compute-0 ceph-mgr[75322]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 21 13:46:21 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 35 pg[8.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285da33500b33a392a5323e6f07db112a33bd592cfe956b23ecf335e4b1e4931/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285da33500b33a392a5323e6f07db112a33bd592cfe956b23ecf335e4b1e4931/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285da33500b33a392a5323e6f07db112a33bd592cfe956b23ecf335e4b1e4931/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/285da33500b33a392a5323e6f07db112a33bd592cfe956b23ecf335e4b1e4931/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ddixwa supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:21 compute-0 podman[95167]: 2026-01-21 13:46:21.629990398 +0000 UTC m=+0.694268850 container create d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:21 compute-0 podman[95128]: 2026-01-21 13:46:21.635763797 +0000 UTC m=+1.323464368 container init 380cea61fdd3e7c3770a41073ef15e8e1016252df6e767dd7931a2e6c1d30007 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mds-cephfs-compute-0-ddixwa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 13:46:21 compute-0 podman[95128]: 2026-01-21 13:46:21.643313279 +0000 UTC m=+1.331013820 container start 380cea61fdd3e7c3770a41073ef15e8e1016252df6e767dd7931a2e6c1d30007 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mds-cephfs-compute-0-ddixwa, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 21 13:46:21 compute-0 bash[95128]: 380cea61fdd3e7c3770a41073ef15e8e1016252df6e767dd7931a2e6c1d30007
Jan 21 13:46:21 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.ddixwa for 2f0e9cad-f0a3-5869-9cc3-8d84d071866a.
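cephadm wraps each daemon in a templated systemd unit named ceph-<fsid>@<type>.<id>.service, which is the unit that just started. Status and recent logs for this MDS:

    $ systemctl status 'ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mds.cephfs.compute-0.ddixwa.service'
    $ journalctl -u 'ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mds.cephfs.compute-0.ddixwa.service' -n 50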
Jan 21 13:46:21 compute-0 systemd[1]: Started libpod-conmon-d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c.scope.
Jan 21 13:46:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d668f61259387b4194989b9bd4467d59e771a6c135b43c1147adfd6bbe768345/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d668f61259387b4194989b9bd4467d59e771a6c135b43c1147adfd6bbe768345/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:21 compute-0 ceph-mds[95704]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 13:46:21 compute-0 ceph-mds[95704]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 21 13:46:21 compute-0 ceph-mds[95704]: main not setting numa affinity
Jan 21 13:46:21 compute-0 podman[95167]: 2026-01-21 13:46:21.603342663 +0000 UTC m=+0.667621155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:21 compute-0 ceph-mds[95704]: pidfile_write: ignore empty --pid-file
Jan 21 13:46:21 compute-0 sudo[94774]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:21 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mds-cephfs-compute-0-ddixwa[95223]: starting mds.cephfs.compute-0.ddixwa at 
Jan 21 13:46:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:46:21 compute-0 podman[95167]: 2026-01-21 13:46:21.713219096 +0000 UTC m=+0.777497598 container init d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:21 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa Updating MDS map to version 2 from mon.0
Jan 21 13:46:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:46:21 compute-0 podman[95167]: 2026-01-21 13:46:21.720674766 +0000 UTC m=+0.784953248 container start d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:21 compute-0 podman[95167]: 2026-01-21 13:46:21.727995223 +0000 UTC m=+0.792273715 container attach d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 13:46:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:21 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev 3d7f3518-2244-4e51-b382-2c2a8c5fe4f4 (Updating mds.cephfs deployment (+1 -> 1))
Jan 21 13:46:21 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event 3d7f3518-2244-4e51-b382-2c2a8c5fe4f4 (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Jan 21 13:46:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 21 13:46:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 21 13:46:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:21 compute-0 sudo[95774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:46:21 compute-0 sudo[95774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:21 compute-0 sudo[95774]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:21 compute-0 sudo[95818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:21 compute-0 sudo[95818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:21 compute-0 sudo[95818]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:21 compute-0 sudo[95843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 13:46:21 compute-0 sudo[95843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:22 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:46:22 compute-0 affectionate_carson[95752]: 
Jan 21 13:46:22 compute-0 affectionate_carson[95752]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 21 13:46:22 compute-0 systemd[1]: libpod-d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c.scope: Deactivated successfully.
Jan 21 13:46:22 compute-0 podman[95167]: 2026-01-21 13:46:22.173470866 +0000 UTC m=+1.237749328 container died d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d668f61259387b4194989b9bd4467d59e771a6c135b43c1147adfd6bbe768345-merged.mount: Deactivated successfully.
Jan 21 13:46:22 compute-0 podman[95167]: 2026-01-21 13:46:22.230604425 +0000 UTC m=+1.294882887 container remove d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c (image=quay.io/ceph/ceph:v20, name=affectionate_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:22 compute-0 sudo[95164]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:22 compute-0 systemd[1]: libpod-conmon-d5c688af19aa932a90d0b5078d3d5955841b1735b7c8807660d65bc8c2b2711c.scope: Deactivated successfully.
Jan 21 13:46:22 compute-0 podman[95925]: 2026-01-21 13:46:22.37412162 +0000 UTC m=+0.054997829 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:22 compute-0 podman[95925]: 2026-01-21 13:46:22.481674356 +0000 UTC m=+0.162550545 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:46:22 compute-0 ceph-mon[75031]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:22 compute-0 ceph-mon[75031]: pgmap v76: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:22 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/854169589' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 21 13:46:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:22 compute-0 ceph-mon[75031]: osdmap e35: 3 total, 3 up, 3 in
Jan 21 13:46:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:22 compute-0 ceph-mon[75031]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:46:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 21 13:46:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 21 13:46:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 21 13:46:22 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 36 pg[9.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 21 13:46:22 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 21 13:46:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e3 new map
Jan 21 13:46:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-01-21T13:46:22.714883+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T13:46:07.955594+0000
                                           modified        2026-01-21T13:46:07.955594+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ddixwa{-1:14256} state up:standby seq 1 addr [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] compat {c=[1],r=[1],i=[1fff]}]
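The indented block above is the monitor's FSMap dump at epoch 3: the filesystem cephfs exists with max_mds 1, no rank is in or up yet, and the freshly deployed daemon sits in the standby pool. The same view is available on demand:

    $ ceph fs dump          # full FSMap, like the print_map block above
    $ ceph fs get cephfs    # just this filesystem's map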
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa Updating MDS map to version 3 from mon.0
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa Monitors have assigned me to become a standby
Jan 21 13:46:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] up:boot
Jan 21 13:46:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] as mds.0
Jan 21 13:46:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ddixwa assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 21 13:46:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 21 13:46:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 21 13:46:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 21 13:46:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
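With the standby assigned to rank 0, MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX clear and the cluster returns to HEALTH_OK, as logged above. To verify at this point:

    $ ceph health detail
    $ ceph -s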
Jan 21 13:46:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ddixwa"} v 0)
Jan 21 13:46:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.ddixwa"} : dispatch
Jan 21 13:46:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e3 all = 0
Jan 21 13:46:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e4 new map
Jan 21 13:46:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-01-21T13:46:22.721347+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T13:46:07.955594+0000
                                           modified        2026-01-21T13:46:22.721339+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14256}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.ddixwa{0:14256} state up:creating seq 1 addr [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 21 13:46:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ddixwa=up:creating}
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa Updating MDS map to version 4 from mon.0
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x1
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x100
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x600
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x601
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x602
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x603
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x604
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x605
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x606
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x607
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x608
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.cache creating system inode with ino:0x609
Jan 21 13:46:22 compute-0 ceph-mds[95704]: mds.0.4 creating_done
Jan 21 13:46:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ddixwa is now active in filesystem cephfs as rank 0
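After creating_done, rank 0 goes active and the filesystem is mountable. A sketch of a kernel-client mount using the new device-string syntax, assuming the client.admin key, the default mon port 6789, and a secret file at an illustrative path:

    # <user>@<fsid>.<fsname>=/ is the new-style CephFS mount device string.
    $ sudo mount -t ceph admin@2f0e9cad-f0a3-5869-9cc3-8d84d071866a.cephfs=/ /mnt \
          -o mon_addr=192.168.122.100:6789,secretfile=/etc/ceph/admin.secret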
Jan 21 13:46:22 compute-0 sudo[96097]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewyzafecpkcyzviawxlyltuuqrkeauyp ; /usr/bin/python3'
Jan 21 13:46:22 compute-0 sudo[96097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:23 compute-0 python3[96109]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
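This probe exports the saved service specs, the same data the mon logged earlier as mgr/cephadm/spec.rgw.rgw and mgr/cephadm/spec.mds.cephfs config-key writes. Without the container wrapper, YAML output is the more readable form:

    $ ceph orch ls --export --format yaml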
Jan 21 13:46:23 compute-0 podman[96137]: 2026-01-21 13:46:23.145921009 +0000 UTC m=+0.041806120 container create 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:23 compute-0 sudo[95843]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:23 compute-0 systemd[1]: Started libpod-conmon-4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167.scope.
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:23 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c7c5aea0d958d2ca7050a2d1d79dac63eeec66d27ec7e3a82d7f933093c77a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c7c5aea0d958d2ca7050a2d1d79dac63eeec66d27ec7e3a82d7f933093c77a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:23 compute-0 podman[96137]: 2026-01-21 13:46:23.129315738 +0000 UTC m=+0.025200859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:23 compute-0 podman[96137]: 2026-01-21 13:46:23.229096818 +0000 UTC m=+0.124981939 container init 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:23 compute-0 podman[96137]: 2026-01-21 13:46:23.234808486 +0000 UTC m=+0.130693597 container start 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:23 compute-0 podman[96137]: 2026-01-21 13:46:23.238948486 +0000 UTC m=+0.134833617 container attach 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 13:46:23 compute-0 sudo[96165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:23 compute-0 sudo[96165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:23 compute-0 sudo[96165]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:23 compute-0 sudo[96191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:46:23 compute-0 sudo[96191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
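The sudo COMMAND above is cephadm's OSD deployment step: it re-executes the copied cephadm binary, which wraps ceph-volume inside the pinned container image and runs "lvm batch" against three pre-created logical volumes with a bluestore objectstore. A minimal sketch of driving the same invocation from Python, assuming only the arguments visible in the logged command line (the wrapper script itself and the stdin payload are illustrative assumptions, not part of the deployment):

    import subprocess

    # All values below are copied from the logged command line.
    FSID = "2f0e9cad-f0a3-5869-9cc3-8d84d071866a"
    CEPHADM = f"/var/lib/ceph/{FSID}/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b"
    IMAGE = "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"
    LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]

    config_json = "{}"  # placeholder: --config-json - makes cephadm read the config/keyring JSON from stdin

    cmd = [
        "sudo", "/bin/python3", CEPHADM,
        "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
        "--image", IMAGE, "--timeout", "895",
        "ceph-volume", "--fsid", FSID, "--config-json", "-",
        "--", "lvm", "batch", "--no-auto", *LVS,
        "--objectstore", "bluestore", "--yes", "--no-systemd",
    ]
    subprocess.run(cmd, input=config_json, text=True, check=True)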
Jan 21 13:46:23 compute-0 ansible-async_wrapper.py[94657]: Done in kid B.
Jan 21 13:46:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v79: 9 pgs: 2 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 21 13:46:23 compute-0 ceph-mon[75031]: osdmap e36: 3 total, 3 up, 3 in
Jan 21 13:46:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mds.? [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] up:boot
Jan 21 13:46:23 compute-0 ceph-mon[75031]: daemon mds.cephfs.compute-0.ddixwa assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: Cluster is now healthy
Jan 21 13:46:23 compute-0 ceph-mon[75031]: fsmap cephfs:0 1 up:standby
Jan 21 13:46:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.ddixwa"} : dispatch
Jan 21 13:46:23 compute-0 ceph-mon[75031]: fsmap cephfs:1 {0=cephfs.compute-0.ddixwa=up:creating}
Jan 21 13:46:23 compute-0 ceph-mon[75031]: daemon mds.cephfs.compute-0.ddixwa is now active in filesystem cephfs as rank 0
Jan 21 13:46:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:46:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:46:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:46:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:23 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 37 pg[9.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:23 compute-0 podman[96245]: 2026-01-21 13:46:23.610510664 +0000 UTC m=+0.045711955 container create 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 21 13:46:23 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} v 0)
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 13:46:23 compute-0 suspicious_shannon[96162]: 
Jan 21 13:46:23 compute-0 suspicious_shannon[96162]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
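The blob printed by suspicious_shannon above is the output of "ceph orch ls --export --format json": the complete set of service specs cephadm is reconciling (crash, mds.cephfs, mgr, mon, osd.default_drive_group, rgw.rgw). A short sketch, assuming the export has been saved to a file named orch_ls_export.json (the file name and the summary loop are assumptions), that lists each service's placement:

    import json

    with open("orch_ls_export.json") as f:
        specs = json.load(f)

    for spec in specs:
        placement = spec.get("placement", {})
        where = placement.get("hosts") or placement.get("host_pattern")
        print(f"{spec['service_name']:<24} type={spec['service_type']:<5} placement={where}")
    # e.g. "osd.default_drive_group  type=osd   placement=['compute-0']"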
Jan 21 13:46:23 compute-0 systemd[1]: Started libpod-conmon-7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e.scope.
Jan 21 13:46:23 compute-0 podman[96137]: 2026-01-21 13:46:23.667686624 +0000 UTC m=+0.563571755 container died 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:23 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:23 compute-0 systemd[1]: libpod-4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167.scope: Deactivated successfully.
Jan 21 13:46:23 compute-0 podman[96245]: 2026-01-21 13:46:23.587206751 +0000 UTC m=+0.022408072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-80c7c5aea0d958d2ca7050a2d1d79dac63eeec66d27ec7e3a82d7f933093c77a-merged.mount: Deactivated successfully.
Jan 21 13:46:23 compute-0 podman[96245]: 2026-01-21 13:46:23.692805511 +0000 UTC m=+0.128006822 container init 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:23 compute-0 podman[96245]: 2026-01-21 13:46:23.698171051 +0000 UTC m=+0.133372342 container start 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:23 compute-0 recursing_mccarthy[96265]: 167 167
Jan 21 13:46:23 compute-0 podman[96137]: 2026-01-21 13:46:23.709536515 +0000 UTC m=+0.605421626 container remove 4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167 (image=quay.io/ceph/ceph:v20, name=suspicious_shannon, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:23 compute-0 systemd[1]: libpod-7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e.scope: Deactivated successfully.
Jan 21 13:46:23 compute-0 systemd[1]: libpod-conmon-4f79c489958b29c87d7cd1eaa6c3fe4ca721783458e1925ad13b87099d267167.scope: Deactivated successfully.
Jan 21 13:46:23 compute-0 podman[96245]: 2026-01-21 13:46:23.720679354 +0000 UTC m=+0.155880645 container attach 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 13:46:23 compute-0 podman[96245]: 2026-01-21 13:46:23.721325319 +0000 UTC m=+0.156526610 container died 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:23 compute-0 sudo[96097]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e5 new map
Jan 21 13:46:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-01-21T13:46:23.724742+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-21T13:46:07.955594+0000
                                           modified        2026-01-21T13:46:23.724739+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14256}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14256 members: 14256
                                           [mds.cephfs.compute-0.ddixwa{0:14256} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 21 13:46:23 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa Updating MDS map to version 5 from mon.0
Jan 21 13:46:23 compute-0 ceph-mds[95704]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 21 13:46:23 compute-0 ceph-mds[95704]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 21 13:46:23 compute-0 ceph-mds[95704]: mds.0.4 recovery_done -- successful recovery!
Jan 21 13:46:23 compute-0 ceph-mds[95704]: mds.0.4 active_start
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] up:active
Jan 21 13:46:23 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ddixwa=up:active}
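The print_map dump a few lines up is the human-readable MDSMap (epoch 5: one rank, max_mds 1, data pool 7, metadata pool 6), and the fsmap line above confirms rank 0 reaching up:active. The same fields are exposed as JSON by the standard "ceph fs dump --format json" command; a small sketch, where the parsing around the call is an illustrative assumption, that extracts rank states:

    import json
    import subprocess

    raw = subprocess.run(["ceph", "fs", "dump", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    dump = json.loads(raw)

    for fs in dump["filesystems"]:
        mdsmap = fs["mdsmap"]
        print(f"{mdsmap['fs_name']}: epoch {dump['epoch']}, max_mds {mdsmap['max_mds']}")
        for info in mdsmap["info"].values():
            print(f"  rank {info['rank']} {info['name']}: {info['state']}")  # e.g. up:active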
Jan 21 13:46:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-606414b85feb04834ecdd8deae7baff716b9e020c50d4712b319a895bd1eb5e6-merged.mount: Deactivated successfully.
Jan 21 13:46:23 compute-0 podman[96245]: 2026-01-21 13:46:23.760333841 +0000 UTC m=+0.195535132 container remove 7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mccarthy, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 13:46:23 compute-0 systemd[1]: libpod-conmon-7ad2e9cebd64e9ac565fb6fd54bb4573f665d420a6794a14bbf3fc03feb5b07e.scope: Deactivated successfully.
Jan 21 13:46:23 compute-0 podman[96304]: 2026-01-21 13:46:23.911758266 +0000 UTC m=+0.035444477 container create ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 13:46:23 compute-0 systemd[1]: Started libpod-conmon-ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8.scope.
Jan 21 13:46:23 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:23 compute-0 podman[96304]: 2026-01-21 13:46:23.979368467 +0000 UTC m=+0.103054698 container init ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:23 compute-0 podman[96304]: 2026-01-21 13:46:23.989256217 +0000 UTC m=+0.112942438 container start ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:23 compute-0 podman[96304]: 2026-01-21 13:46:23.896913008 +0000 UTC m=+0.020599229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:23 compute-0 podman[96304]: 2026-01-21 13:46:23.993864538 +0000 UTC m=+0.117550779 container attach ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 13:46:24 compute-0 lucid_swanson[96320]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:46:24 compute-0 lucid_swanson[96320]: --> All data devices are unavailable
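Here ceph-volume (container lucid_swanson) rejects all three logical volumes. Since osd.0 through osd.2 were already deployed on them about a minute earlier, "unavailable" in this re-run most likely means "already carries an OSD", making the batch an idempotent no-op rather than a failure. One way to confirm what occupies the LVs is the "ceph-volume lvm list --format json" call that cephadm itself issues a moment later; a sketch of reading its output, assuming it runs on a host with the ceph-volume tooling available (in this deployment the same call goes through the containerized cephadm wrapper):

    import json
    import subprocess

    raw = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    osds = json.loads(raw)  # maps OSD id -> list of LV records

    for osd_id, lvs in osds.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} ({lv['type']})")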
Jan 21 13:46:24 compute-0 systemd[1]: libpod-ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8.scope: Deactivated successfully.
Jan 21 13:46:24 compute-0 podman[96304]: 2026-01-21 13:46:24.530254096 +0000 UTC m=+0.653940357 container died ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2715cf70d3ec6e7c87894d353df11ffde22ebd8c244d3570eee111e0e4dae966-merged.mount: Deactivated successfully.
Jan 21 13:46:24 compute-0 podman[96304]: 2026-01-21 13:46:24.593893071 +0000 UTC m=+0.717579302 container remove ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 13:46:24 compute-0 systemd[1]: libpod-conmon-ea05c0d14dddcde2d1da73f158d8016512c04ea5e32afc918871fb45ee3aaca8.scope: Deactivated successfully.
Jan 21 13:46:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 21 13:46:24 compute-0 ceph-mon[75031]: pgmap v79: 9 pgs: 2 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 21 13:46:24 compute-0 ceph-mon[75031]: osdmap e37: 3 total, 3 up, 3 in
Jan 21 13:46:24 compute-0 ceph-mon[75031]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:46:24 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 13:46:24 compute-0 ceph-mon[75031]: mds.? [v2:192.168.122.100:6814/3706750080,v1:192.168.122.100:6815/3706750080] up:active
Jan 21 13:46:24 compute-0 ceph-mon[75031]: fsmap cephfs:1 {0=cephfs.compute-0.ddixwa=up:active}
Jan 21 13:46:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 21 13:46:24 compute-0 sudo[96375]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brpvebxtstypfbzuvfknfkxhkicnjhcr ; /usr/bin/python3'
Jan 21 13:46:24 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 21 13:46:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 21 13:46:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 21 13:46:24 compute-0 sudo[96375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:24 compute-0 sudo[96191]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:24 compute-0 sudo[96378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:24 compute-0 sudo[96378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:24 compute-0 sudo[96378]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:24 compute-0 sudo[96403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:46:24 compute-0 sudo[96403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:24 compute-0 python3[96377]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
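The Ansible task above shows the pattern used throughout this run to reach the Ceph CLI without "cephadm shell": a throwaway "podman run --rm" with the host's /etc/ceph mounted in and "--entrypoint ceph". Rebuilt as an argv list, with every flag copied from the logged _raw_params and nothing added:

    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v20",
        "--fsid", "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "ps", "-f", "json",
    ]
    print(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)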
Jan 21 13:46:24 compute-0 podman[96428]: 2026-01-21 13:46:24.843082966 +0000 UTC m=+0.047037365 container create 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 13:46:24 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 38 pg[10.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [2] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:24 compute-0 systemd[1]: Started libpod-conmon-3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9.scope.
Jan 21 13:46:24 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:24 compute-0 podman[96428]: 2026-01-21 13:46:24.827187673 +0000 UTC m=+0.031142082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3fd812f315d4a37495b3473e75ec73b68dd7bd4038ad5198361753b9fa22602/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3fd812f315d4a37495b3473e75ec73b68dd7bd4038ad5198361753b9fa22602/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:24 compute-0 podman[96428]: 2026-01-21 13:46:24.931627784 +0000 UTC m=+0.135582223 container init 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 13:46:24 compute-0 podman[96428]: 2026-01-21 13:46:24.945204391 +0000 UTC m=+0.149158800 container start 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 13:46:24 compute-0 podman[96428]: 2026-01-21 13:46:24.948794459 +0000 UTC m=+0.152748858 container attach 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 13:46:25 compute-0 podman[96460]: 2026-01-21 13:46:25.031643788 +0000 UTC m=+0.060459630 container create ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 13:46:25 compute-0 systemd[1]: Started libpod-conmon-ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40.scope.
Jan 21 13:46:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:25 compute-0 podman[96460]: 2026-01-21 13:46:25.103451441 +0000 UTC m=+0.132267293 container init ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:25 compute-0 podman[96460]: 2026-01-21 13:46:25.010617151 +0000 UTC m=+0.039433033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:25 compute-0 podman[96460]: 2026-01-21 13:46:25.108266148 +0000 UTC m=+0.137081980 container start ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:25 compute-0 podman[96460]: 2026-01-21 13:46:25.110975964 +0000 UTC m=+0.139792036 container attach ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 13:46:25 compute-0 intelligent_sutherland[96493]: 167 167
Jan 21 13:46:25 compute-0 systemd[1]: libpod-ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40.scope: Deactivated successfully.
Jan 21 13:46:25 compute-0 conmon[96493]: conmon ccceef272ecbcfd8a367 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40.scope/container/memory.events
Jan 21 13:46:25 compute-0 podman[96500]: 2026-01-21 13:46:25.149478093 +0000 UTC m=+0.022143666 container died ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-12f1cc892b71f23caf956e9f54db7aa928f1c6e1efd4ee8eb76c738a813ebdd9-merged.mount: Deactivated successfully.
Jan 21 13:46:25 compute-0 podman[96500]: 2026-01-21 13:46:25.189764285 +0000 UTC m=+0.062429908 container remove ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sutherland, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:25 compute-0 systemd[1]: libpod-conmon-ccceef272ecbcfd8a36749a6b57db0db6a681bdb222bd8bb62829c02c29bad40.scope: Deactivated successfully.
Jan 21 13:46:25 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:46:25 compute-0 pensive_galois[96443]: 
Jan 21 13:46:25 compute-0 pensive_galois[96443]: [{"container_id": "52571d403aea", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.20%", "created": "2026-01-21T13:45:02.316168Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-21T13:45:02.372493Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174177Z", "memory_usage": 7799308, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-21T13:45:02.184390Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@crash.compute-0", "version": "20.2.0"}, {"container_id": "380cea61fdd3", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "7.84%", "created": "2026-01-21T13:46:21.655932Z", "daemon_id": "cephfs.compute-0.ddixwa", "daemon_name": "mds.cephfs.compute-0.ddixwa", "daemon_type": "mds", "events": ["2026-01-21T13:46:21.731740Z daemon:mds.cephfs.compute-0.ddixwa [INFO] \"Deployed mds.cephfs.compute-0.ddixwa on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174849Z", "memory_usage": 12582912, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-21T13:46:20.337242Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mds.cephfs.compute-0.ddixwa", "version": "20.2.0"}, {"container_id": "e43620387fac", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "17.15%", "created": "2026-01-21T13:44:22.810508Z", "daemon_id": "compute-0.tnwklj", "daemon_name": "mgr.compute-0.tnwklj", "daemon_type": "mgr", "events": ["2026-01-21T13:45:06.651074Z daemon:mgr.compute-0.tnwklj [INFO] \"Reconfigured mgr.compute-0.tnwklj on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174103Z", "memory_usage": 549768396, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-21T13:44:22.699661Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mgr.compute-0.tnwklj", "version": "20.2.0"}, {"container_id": "cfe4b6f08f6d", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.88%", "created": "2026-01-21T13:44:18.779414Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-21T13:45:05.928055Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.173996Z", "memory_request": 2147483648, "memory_usage": 45382369, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-21T13:44:20.894393Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@mon.compute-0", "version": "20.2.0"}, {"container_id": "534fa4fe4148", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.65%", "created": "2026-01-21T13:45:27.274662Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-21T13:45:27.343889Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174262Z", "memory_request": 4294967296, "memory_usage": 56476303, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-21T13:45:27.152712Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@osd.0", "version": "20.2.0"}, {"container_id": "75f58788bd5e", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.80%", "created": "2026-01-21T13:45:31.873769Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-21T13:45:32.072850Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174526Z", "memory_request": 4294967296, "memory_usage": 58625884, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-21T13:45:31.716084Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@osd.1", "version": "20.2.0"}, {"container_id": "391c65d49d06", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.94%", "created": "2026-01-21T13:45:36.373873Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-21T13:45:36.548018Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174617Z", "memory_request": 4294967296, "memory_usage": 56916705, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-21T13:45:36.206432Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@osd.2", "version": "20.2.0"}, {"container_id": "d95768cf4dac", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "3.70%", "created": "2026-01-21T13:46:18.748864Z", "daemon_id": "rgw.compute-0.xeytxr", "daemon_name": "rgw.rgw.compute-0.xeytxr", "daemon_type": "rgw", "events": ["2026-01-21T13:46:18.819074Z daemon:rgw.rgw.compute-0.xeytxr [INFO] \"Deployed rgw.rgw.compute-0.xeytxr on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2026-01-21T13:46:23.174739Z", "memory_usage": 56654561, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-01-21T13:46:18.641851Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a@rgw.rgw.compute-0.xeytxr", "version": "20.2.0"}]
Jan 21 13:46:25 compute-0 systemd[1]: libpod-3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9.scope: Deactivated successfully.
Jan 21 13:46:25 compute-0 podman[96428]: 2026-01-21 13:46:25.384179828 +0000 UTC m=+0.588134227 container died 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:25 compute-0 podman[96522]: 2026-01-21 13:46:25.39957744 +0000 UTC m=+0.054338713 container create 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:25 compute-0 podman[96428]: 2026-01-21 13:46:25.437582347 +0000 UTC m=+0.641536746 container remove 3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9 (image=quay.io/ceph/ceph:v20, name=pensive_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 13:46:25 compute-0 systemd[1]: Started libpod-conmon-7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7.scope.
Jan 21 13:46:25 compute-0 systemd[1]: libpod-conmon-3ff07e57fa4f92123adf4cf03ed4827d1bac2a846dc59fa67c8c2c62993df1b9.scope: Deactivated successfully.
Jan 21 13:46:25 compute-0 sudo[96375]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:25 compute-0 podman[96522]: 2026-01-21 13:46:25.369975286 +0000 UTC m=+0.024736549 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cab26400370a1fcffb35b867943086380b28e3c5d226fe951be5b7e0e0ba56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cab26400370a1fcffb35b867943086380b28e3c5d226fe951be5b7e0e0ba56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cab26400370a1fcffb35b867943086380b28e3c5d226fe951be5b7e0e0ba56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cab26400370a1fcffb35b867943086380b28e3c5d226fe951be5b7e0e0ba56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
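These recurring xfs notices (repeated at each container start below) only indicate that the filesystem backing /var/lib/containers was created without the XFS bigtime feature, so its inode timestamps cap out at 0x7fffffff (January 2038); they are informational and unrelated to the Ceph deployment itself.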
Jan 21 13:46:25 compute-0 podman[96522]: 2026-01-21 13:46:25.503485538 +0000 UTC m=+0.158246811 container init 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:25 compute-0 podman[96522]: 2026-01-21 13:46:25.511162243 +0000 UTC m=+0.165923536 container start 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:25 compute-0 podman[96522]: 2026-01-21 13:46:25.51519548 +0000 UTC m=+0.169956763 container attach 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 13:46:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v82: 10 pgs: 1 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 21 13:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3fd812f315d4a37495b3473e75ec73b68dd7bd4038ad5198361753b9fa22602-merged.mount: Deactivated successfully.
Jan 21 13:46:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 21 13:46:25 compute-0 rsyslogd[1002]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "52571d403aea", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
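rsyslogd truncated an 8842-byte message here, apparently the same `orch ps` JSON whose continuation is logged without a syslog prefix at the top of this excerpt, because it exceeds the 8096-byte default. If the full payload is needed in syslog, the limit can be raised with rsyslog's $MaxMessageSize directive near the top of /etc/rsyslog.conf (a value such as 64k is a suggestion, not this host's actual setting), as described at the linked error page.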
Jan 21 13:46:25 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 13:46:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 21 13:46:25 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 21 13:46:25 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 39 pg[10.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [2] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:25 compute-0 ceph-mon[75031]: osdmap e38: 3 total, 3 up, 3 in
Jan 21 13:46:25 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 21 13:46:25 compute-0 ceph-mon[75031]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 21 13:46:25 compute-0 beautiful_black[96551]: {
Jan 21 13:46:25 compute-0 beautiful_black[96551]:     "0": [
Jan 21 13:46:25 compute-0 beautiful_black[96551]:         {
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "devices": [
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "/dev/loop3"
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             ],
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_name": "ceph_lv0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_size": "21470642176",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "name": "ceph_lv0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "tags": {
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.crush_device_class": "",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.encrypted": "0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.osd_id": "0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.type": "block",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.vdo": "0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.with_tpm": "0"
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             },
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "type": "block",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "vg_name": "ceph_vg0"
Jan 21 13:46:25 compute-0 beautiful_black[96551]:         }
Jan 21 13:46:25 compute-0 beautiful_black[96551]:     ],
Jan 21 13:46:25 compute-0 beautiful_black[96551]:     "1": [
Jan 21 13:46:25 compute-0 beautiful_black[96551]:         {
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "devices": [
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "/dev/loop4"
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             ],
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_name": "ceph_lv1",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_size": "21470642176",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "name": "ceph_lv1",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "tags": {
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.crush_device_class": "",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.encrypted": "0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.osd_id": "1",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.type": "block",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.vdo": "0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.with_tpm": "0"
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             },
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "type": "block",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "vg_name": "ceph_vg1"
Jan 21 13:46:25 compute-0 beautiful_black[96551]:         }
Jan 21 13:46:25 compute-0 beautiful_black[96551]:     ],
Jan 21 13:46:25 compute-0 beautiful_black[96551]:     "2": [
Jan 21 13:46:25 compute-0 beautiful_black[96551]:         {
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "devices": [
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "/dev/loop5"
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             ],
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_name": "ceph_lv2",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_size": "21470642176",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "name": "ceph_lv2",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "tags": {
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.crush_device_class": "",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.encrypted": "0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.osd_id": "2",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.type": "block",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.vdo": "0",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:                 "ceph.with_tpm": "0"
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             },
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "type": "block",
Jan 21 13:46:25 compute-0 beautiful_black[96551]:             "vg_name": "ceph_vg2"
Jan 21 13:46:25 compute-0 beautiful_black[96551]:         }
Jan 21 13:46:25 compute-0 beautiful_black[96551]:     ]
Jan 21 13:46:25 compute-0 beautiful_black[96551]: }
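The beautiful_black container above is evidently cephadm's shim around `ceph-volume lvm list --format json`: the object maps each OSD id to the logical volume backing it, here three ~20 GiB LVs carved from loop devices (/dev/loop3-5). The companion `raw list` invocation a few lines below returns the empty object logged by trusting_pare, since all three OSDs are LVM-backed rather than raw devices. A minimal sketch for pulling the OSD-to-device mapping out of such a report, assuming it is saved as lvm_list.json (a hypothetical filename):

    import json

    with open("lvm_list.json") as f:
        report = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2"); each value is the list of
    # logical volumes belonging to that OSD.
    for osd_id, lvs in sorted(report.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")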
Jan 21 13:46:25 compute-0 systemd[1]: libpod-7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7.scope: Deactivated successfully.
Jan 21 13:46:25 compute-0 podman[96522]: 2026-01-21 13:46:25.849767726 +0000 UTC m=+0.504528989 container died 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-89cab26400370a1fcffb35b867943086380b28e3c5d226fe951be5b7e0e0ba56-merged.mount: Deactivated successfully.
Jan 21 13:46:25 compute-0 podman[96522]: 2026-01-21 13:46:25.891828312 +0000 UTC m=+0.546589565 container remove 7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_black, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 13:46:25 compute-0 systemd[1]: libpod-conmon-7d3d02c3b57659b1e64314edd098eb990b1ad6505102b4958579e94c9f70d4f7.scope: Deactivated successfully.
Jan 21 13:46:25 compute-0 sudo[96403]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:25 compute-0 sudo[96573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:25 compute-0 sudo[96573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:25 compute-0 sudo[96573]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:26 compute-0 sudo[96598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:46:26 compute-0 sudo[96598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:26 compute-0 sudo[96648]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yophywacyujvyvxqkycmsebynzptmmwm ; /usr/bin/python3'
Jan 21 13:46:26 compute-0 sudo[96648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:26 compute-0 podman[96661]: 2026-01-21 13:46:26.405781868 +0000 UTC m=+0.062767646 container create 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 13:46:26 compute-0 systemd[1]: Started libpod-conmon-83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f.scope.
Jan 21 13:46:26 compute-0 python3[96660]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
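This Ansible task runs the admin `ceph` CLI (`ceph -s -f json`) out of the quay.io/ceph/ceph:v20 image rather than from a host package, bind-mounting the host's /etc/ceph for the config and keyring. A minimal reproduction of the same pattern from Python, using only the flags visible in the command line above (the 60-second timeout is an added assumption):

    import subprocess

    # Mirror the podman invocation from the Ansible task above: run the ceph
    # CLI from the container image against the host's config and admin keyring.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v20",
        "--fsid", "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "-s", "-f", "json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True,
                         timeout=60)
    print(out.stdout)  # the epic_yalow lines below show this same JSON document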
Jan 21 13:46:26 compute-0 podman[96661]: 2026-01-21 13:46:26.383674815 +0000 UTC m=+0.040660623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:26 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:26 compute-0 podman[96661]: 2026-01-21 13:46:26.498521197 +0000 UTC m=+0.155507065 container init 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 13:46:26 compute-0 podman[96661]: 2026-01-21 13:46:26.507524444 +0000 UTC m=+0.164510252 container start 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:26 compute-0 nostalgic_panini[96677]: 167 167
Jan 21 13:46:26 compute-0 podman[96661]: 2026-01-21 13:46:26.514091593 +0000 UTC m=+0.171077401 container attach 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:26 compute-0 systemd[1]: libpod-83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f.scope: Deactivated successfully.
Jan 21 13:46:26 compute-0 podman[96661]: 2026-01-21 13:46:26.515983448 +0000 UTC m=+0.172969256 container died 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 13:46:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:46:26 compute-0 podman[96679]: 2026-01-21 13:46:26.546789512 +0000 UTC m=+0.068698070 container create 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 13:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8887e8413acb4a6e7fcf58467c4a11082edc8427dca70f240b2788f13b1037b-merged.mount: Deactivated successfully.
Jan 21 13:46:26 compute-0 podman[96661]: 2026-01-21 13:46:26.577828881 +0000 UTC m=+0.234814689 container remove 83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:26 compute-0 ceph-mgr[75322]: [progress INFO root] Writing back 5 completed events
Jan 21 13:46:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 13:46:26 compute-0 systemd[1]: Started libpod-conmon-2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4.scope.
Jan 21 13:46:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:26 compute-0 systemd[1]: libpod-conmon-83ec7b30ccff134dde1170018933e9da19388e947a7f12fead09db97b189b68f.scope: Deactivated successfully.
Jan 21 13:46:26 compute-0 podman[96679]: 2026-01-21 13:46:26.510118167 +0000 UTC m=+0.032026765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:26 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c51465ba9c61af92a96b5967ff3705951332c35a79edae95ab4303a2ec1c5b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c51465ba9c61af92a96b5967ff3705951332c35a79edae95ab4303a2ec1c5b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 21 13:46:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 21 13:46:26 compute-0 podman[96679]: 2026-01-21 13:46:26.651206543 +0000 UTC m=+0.173115051 container init 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:46:26 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 21 13:46:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 21 13:46:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 21 13:46:26 compute-0 ceph-mon[75031]: pgmap v82: 10 pgs: 1 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 21 13:46:26 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 21 13:46:26 compute-0 ceph-mon[75031]: osdmap e39: 3 total, 3 up, 3 in
Jan 21 13:46:26 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:26 compute-0 podman[96679]: 2026-01-21 13:46:26.657662268 +0000 UTC m=+0.179570776 container start 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:26 compute-0 podman[96679]: 2026-01-21 13:46:26.660821715 +0000 UTC m=+0.182730223 container attach 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:26 compute-0 podman[96720]: 2026-01-21 13:46:26.749236809 +0000 UTC m=+0.053028481 container create abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:26 compute-0 systemd[1]: Started libpod-conmon-abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f.scope.
Jan 21 13:46:26 compute-0 podman[96720]: 2026-01-21 13:46:26.721347336 +0000 UTC m=+0.025139078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:26 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ded4f40b00c090eb9d59154f13853f2b20f390ab9976472d961f118ec0b11c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ded4f40b00c090eb9d59154f13853f2b20f390ab9976472d961f118ec0b11c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ded4f40b00c090eb9d59154f13853f2b20f390ab9976472d961f118ec0b11c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ded4f40b00c090eb9d59154f13853f2b20f390ab9976472d961f118ec0b11c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:26 compute-0 podman[96720]: 2026-01-21 13:46:26.862709148 +0000 UTC m=+0.166500880 container init abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 13:46:26 compute-0 podman[96720]: 2026-01-21 13:46:26.870963237 +0000 UTC m=+0.174754929 container start abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 13:46:26 compute-0 podman[96720]: 2026-01-21 13:46:26.874751048 +0000 UTC m=+0.178542790 container attach abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 13:46:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 21 13:46:27 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/254387530' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 13:46:27 compute-0 epic_yalow[96709]: 
Jan 21 13:46:27 compute-0 epic_yalow[96709]: {"fsid":"2f0e9cad-f0a3-5869-9cc3-8d84d071866a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":126,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":40,"num_osds":3,"num_up_osds":3,"osd_up_since":1769003143,"num_in_osds":3,"osd_in_since":1769003119,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":9},{"state_name":"unknown","count":1}],"num_pgs":10,"num_pools":10,"num_objects":29,"data_bytes":463390,"bytes_used":84107264,"bytes_avail":64327819264,"bytes_total":64411926528,"unknown_pgs_ratio":0.10000000149011612,"read_bytes_sec":1279,"write_bytes_sec":5374,"read_op_per_sec":0,"write_op_per_sec":13},"fsmap":{"epoch":5,"btime":"2026-01-21T13:46:23:724742+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.ddixwa","status":"up:active","gid":14256}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-21T13:45:41.522372+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"37e12876-a85b-42c9-8ae6-94fa3a820be5":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 21 13:46:27 compute-0 systemd[1]: libpod-2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4.scope: Deactivated successfully.
Jan 21 13:46:27 compute-0 podman[96679]: 2026-01-21 13:46:27.211870046 +0000 UTC m=+0.733778554 container died 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 13:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c51465ba9c61af92a96b5967ff3705951332c35a79edae95ab4303a2ec1c5b8-merged.mount: Deactivated successfully.
Jan 21 13:46:27 compute-0 podman[96679]: 2026-01-21 13:46:27.262696832 +0000 UTC m=+0.784605340 container remove 2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4 (image=quay.io/ceph/ceph:v20, name=epic_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 13:46:27 compute-0 systemd[1]: libpod-conmon-2f0e46d8e954c0add3a59d37c51f75d2bb2604dfab4c13325a779405b18f48a4.scope: Deactivated successfully.
Jan 21 13:46:27 compute-0 sudo[96648]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v85: 11 pgs: 2 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 21 13:46:27 compute-0 lvm[96848]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:46:27 compute-0 lvm[96849]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:46:27 compute-0 lvm[96848]: VG ceph_vg0 finished
Jan 21 13:46:27 compute-0 lvm[96849]: VG ceph_vg1 finished
Jan 21 13:46:27 compute-0 lvm[96851]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:46:27 compute-0 lvm[96851]: VG ceph_vg2 finished
Jan 21 13:46:27 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 40 pg[11.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 21 13:46:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 13:46:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 21 13:46:27 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 21 13:46:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 21 13:46:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 21 13:46:27 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 41 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:27 compute-0 ceph-mon[75031]: osdmap e40: 3 total, 3 up, 3 in
Jan 21 13:46:27 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 21 13:46:27 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/254387530' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 21 13:46:27 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 21 13:46:27 compute-0 ceph-mon[75031]: osdmap e41: 3 total, 3 up, 3 in
Jan 21 13:46:27 compute-0 trusting_pare[96755]: {}
Jan 21 13:46:27 compute-0 systemd[1]: libpod-abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f.scope: Deactivated successfully.
Jan 21 13:46:27 compute-0 systemd[1]: libpod-abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f.scope: Consumed 1.337s CPU time.
Jan 21 13:46:27 compute-0 podman[96720]: 2026-01-21 13:46:27.699007925 +0000 UTC m=+1.002799577 container died abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 13:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-21ded4f40b00c090eb9d59154f13853f2b20f390ab9976472d961f118ec0b11c-merged.mount: Deactivated successfully.
Jan 21 13:46:27 compute-0 ceph-mds[95704]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 21 13:46:27 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mds-cephfs-compute-0-ddixwa[95223]: 2026-01-21T13:46:27.730+0000 7f4ca20fb640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 21 13:46:27 compute-0 podman[96720]: 2026-01-21 13:46:27.747631889 +0000 UTC m=+1.051423581 container remove abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_pare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:46:27 compute-0 systemd[1]: libpod-conmon-abb8171c4792b2d7d8ec2d3c06133a893acb09d5abfc63af5b4b612433af055f.scope: Deactivated successfully.
Jan 21 13:46:27 compute-0 sudo[96598]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:46:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:46:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:27 compute-0 sudo[96867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:46:27 compute-0 sudo[96867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:27 compute-0 sudo[96867]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:46:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:46:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:27 compute-0 sudo[96892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:27 compute-0 sudo[96892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:27 compute-0 sudo[96892]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:28 compute-0 sudo[96917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 13:46:28 compute-0 sudo[96917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:28 compute-0 sudo[96965]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvnmbavfvmcoggecuqosmspofyipruzb ; /usr/bin/python3'
Jan 21 13:46:28 compute-0 sudo[96965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:28 compute-0 python3[96969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:28 compute-0 podman[97000]: 2026-01-21 13:46:28.419173088 +0000 UTC m=+0.039482254 container create d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:28 compute-0 systemd[1]: Started libpod-conmon-d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299.scope.
Jan 21 13:46:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/421e495289fb0e711380a992ce33f0e724cb48daf6b34cc6a19664ba83939791/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/421e495289fb0e711380a992ce33f0e724cb48daf6b34cc6a19664ba83939791/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:28 compute-0 podman[97000]: 2026-01-21 13:46:28.399794311 +0000 UTC m=+0.020103487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:28 compute-0 podman[97025]: 2026-01-21 13:46:28.511944698 +0000 UTC m=+0.064856497 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 13:46:28 compute-0 podman[97000]: 2026-01-21 13:46:28.516029367 +0000 UTC m=+0.136338533 container init d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:28 compute-0 podman[97000]: 2026-01-21 13:46:28.529470161 +0000 UTC m=+0.149779337 container start d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:28 compute-0 podman[97000]: 2026-01-21 13:46:28.533534179 +0000 UTC m=+0.153843345 container attach d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:28 compute-0 podman[97025]: 2026-01-21 13:46:28.627329513 +0000 UTC m=+0.180241352 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 21 13:46:28 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 13:46:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 21 13:46:28 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 21 13:46:28 compute-0 ceph-mon[75031]: pgmap v85: 11 pgs: 2 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 21 13:46:28 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 21 13:46:28 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:28 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:28 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:28 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:28 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/772004302' entity='client.rgw.rgw.compute-0.xeytxr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 21 13:46:28 compute-0 ceph-mon[75031]: osdmap e42: 3 total, 3 up, 3 in
Jan 21 13:46:28 compute-0 radosgw[94709]: v1 topic migration: starting v1 topic migration..
Jan 21 13:46:28 compute-0 radosgw[94709]: v1 topic migration: finished v1 topic migration
Jan 21 13:46:28 compute-0 radosgw[94709]: framework: beast
Jan 21 13:46:28 compute-0 radosgw[94709]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 21 13:46:28 compute-0 radosgw[94709]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 21 13:46:28 compute-0 radosgw[94709]: starting handler: beast
Jan 21 13:46:28 compute-0 radosgw[94709]: set uid:gid to 167:167 (ceph:ceph)
Jan 21 13:46:28 compute-0 radosgw[94709]: mgrc service_daemon_register rgw.14254 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.xeytxr,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=f4815f66-3704-4561-98d8-80b5d3621d9a,zone_name=default,zonegroup_id=ce8ca06b-86cb-4011-b9c4-0ea7e0974e31,zonegroup_name=default}
Jan 21 13:46:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 21 13:46:28 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297057373' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:46:28 compute-0 magical_brattain[97034]: 
Jan 21 13:46:29 compute-0 magical_brattain[97034]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","nam
e":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.xeytxr","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 21 13:46:29 compute-0 systemd[1]: libpod-d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299.scope: Deactivated successfully.
Jan 21 13:46:29 compute-0 podman[97000]: 2026-01-21 13:46:29.00169956 +0000 UTC m=+0.622008716 container died d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-421e495289fb0e711380a992ce33f0e724cb48daf6b34cc6a19664ba83939791-merged.mount: Deactivated successfully.
Jan 21 13:46:29 compute-0 podman[97000]: 2026-01-21 13:46:29.042181667 +0000 UTC m=+0.662490823 container remove d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299 (image=quay.io/ceph/ceph:v20, name=magical_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 13:46:29 compute-0 systemd[1]: libpod-conmon-d4d2d29369b3486133dc1b9abc6528b16a727daa1e15b555f134fe3db53fd299.scope: Deactivated successfully.
Jan 21 13:46:29 compute-0 sudo[96965]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:29 compute-0 sudo[96917]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:46:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:46:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:46:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:46:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:46:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:46:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 21 13:46:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:46:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:46:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:46:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:46:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:46:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:29 compute-0 sudo[97279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:29 compute-0 sudo[97279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:29 compute-0 sudo[97279]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:29 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2297057373' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 21 13:46:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:46:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:46:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:46:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:46:29 compute-0 sudo[97304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:46:29 compute-0 sudo[97304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:29 compute-0 sudo[97368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwkwjgxxvgmfryzgndbdhaqyxkssbeiw ; /usr/bin/python3'
Jan 21 13:46:29 compute-0 sudo[97368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:29 compute-0 podman[97363]: 2026-01-21 13:46:29.993095811 +0000 UTC m=+0.043394678 container create 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:30 compute-0 systemd[1]: Started libpod-conmon-337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c.scope.
Jan 21 13:46:30 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:30 compute-0 podman[97363]: 2026-01-21 13:46:29.975100127 +0000 UTC m=+0.025398974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:30 compute-0 podman[97363]: 2026-01-21 13:46:30.070834537 +0000 UTC m=+0.121133394 container init 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:30 compute-0 podman[97363]: 2026-01-21 13:46:30.076482444 +0000 UTC m=+0.126781291 container start 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:30 compute-0 podman[97363]: 2026-01-21 13:46:30.079611189 +0000 UTC m=+0.129910046 container attach 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:30 compute-0 exciting_liskov[97386]: 167 167
Jan 21 13:46:30 compute-0 systemd[1]: libpod-337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c.scope: Deactivated successfully.
Jan 21 13:46:30 compute-0 podman[97363]: 2026-01-21 13:46:30.086966537 +0000 UTC m=+0.137265414 container died 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f41dfaab02671b937ce035cb66d48533c351a2ba5f31cfe76e6258f624e20932-merged.mount: Deactivated successfully.
Jan 21 13:46:30 compute-0 python3[97375]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:30 compute-0 podman[97363]: 2026-01-21 13:46:30.131172664 +0000 UTC m=+0.181471501 container remove 337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_liskov, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 13:46:30 compute-0 systemd[1]: libpod-conmon-337a29c648fd501b89b11ac6986b44c8477d0e60a199e9cc911ebc196717722c.scope: Deactivated successfully.
Jan 21 13:46:30 compute-0 podman[97403]: 2026-01-21 13:46:30.206610855 +0000 UTC m=+0.061282751 container create 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:30 compute-0 systemd[1]: Started libpod-conmon-6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa.scope.
Jan 21 13:46:30 compute-0 podman[97403]: 2026-01-21 13:46:30.178601429 +0000 UTC m=+0.033273315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:30 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a305a0f1f0498ea0c86306810caad44856bec9ab72f28553ae127345a44c79f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a305a0f1f0498ea0c86306810caad44856bec9ab72f28553ae127345a44c79f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:30 compute-0 podman[97423]: 2026-01-21 13:46:30.296095995 +0000 UTC m=+0.052127499 container create bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 13:46:30 compute-0 podman[97403]: 2026-01-21 13:46:30.307970121 +0000 UTC m=+0.162641997 container init 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 13:46:30 compute-0 podman[97403]: 2026-01-21 13:46:30.315909483 +0000 UTC m=+0.170581379 container start 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:30 compute-0 podman[97403]: 2026-01-21 13:46:30.321030087 +0000 UTC m=+0.175702013 container attach 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:30 compute-0 systemd[1]: Started libpod-conmon-bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd.scope.
Jan 21 13:46:30 compute-0 podman[97423]: 2026-01-21 13:46:30.271183433 +0000 UTC m=+0.027214917 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:30 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:30 compute-0 podman[97423]: 2026-01-21 13:46:30.402026212 +0000 UTC m=+0.158057756 container init bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 21 13:46:30 compute-0 podman[97423]: 2026-01-21 13:46:30.412200908 +0000 UTC m=+0.168232412 container start bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 13:46:30 compute-0 podman[97423]: 2026-01-21 13:46:30.417163067 +0000 UTC m=+0.173194621 container attach bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:30 compute-0 ceph-mon[75031]: pgmap v88: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 21 13:46:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 21 13:46:30 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/618331159' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 21 13:46:30 compute-0 festive_yonath[97430]: mimic
Jan 21 13:46:30 compute-0 systemd[1]: libpod-6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa.scope: Deactivated successfully.
Jan 21 13:46:30 compute-0 podman[97403]: 2026-01-21 13:46:30.761793747 +0000 UTC m=+0.616465703 container died 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a305a0f1f0498ea0c86306810caad44856bec9ab72f28553ae127345a44c79f-merged.mount: Deactivated successfully.
Jan 21 13:46:30 compute-0 podman[97403]: 2026-01-21 13:46:30.820621867 +0000 UTC m=+0.675293773 container remove 6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa (image=quay.io/ceph/ceph:v20, name=festive_yonath, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:46:30 compute-0 systemd[1]: libpod-conmon-6b4e1655a4bcd482fc3abb7deaf33a17ec24397dec5017ee58f172d4745d01aa.scope: Deactivated successfully.
Jan 21 13:46:30 compute-0 sudo[97368]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:30 compute-0 jovial_kapitsa[97444]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:46:30 compute-0 jovial_kapitsa[97444]: --> All data devices are unavailable
Jan 21 13:46:31 compute-0 systemd[1]: libpod-bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd.scope: Deactivated successfully.
Jan 21 13:46:31 compute-0 podman[97423]: 2026-01-21 13:46:31.021097046 +0000 UTC m=+0.777128580 container died bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-41be294011deddcce779bb0ed4206fbe49012a209b05f2e8f5bacf033206ca97-merged.mount: Deactivated successfully.
Jan 21 13:46:31 compute-0 podman[97423]: 2026-01-21 13:46:31.081823681 +0000 UTC m=+0.837855185 container remove bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kapitsa, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 13:46:31 compute-0 systemd[1]: libpod-conmon-bd92f807d3fff29ce06763396f9016440ba1fe7bcc8b7bd4e9f2e704278f14dd.scope: Deactivated successfully.
Jan 21 13:46:31 compute-0 sudo[97304]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:31 compute-0 sudo[97508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:31 compute-0 sudo[97508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:31 compute-0 sudo[97508]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:31 compute-0 sudo[97533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:46:31 compute-0 sudo[97533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:46:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 11 KiB/s wr, 211 op/s
Jan 21 13:46:31 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event 37e12876-a85b-42c9-8ae6-94fa3a820be5 (Global Recovery Event) in 10 seconds
Jan 21 13:46:31 compute-0 podman[97568]: 2026-01-21 13:46:31.613639958 +0000 UTC m=+0.048444890 container create e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:31 compute-0 systemd[1]: Started libpod-conmon-e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4.scope.
Jan 21 13:46:31 compute-0 podman[97568]: 2026-01-21 13:46:31.587676861 +0000 UTC m=+0.022481813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:31 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:31 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/618331159' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 21 13:46:31 compute-0 podman[97568]: 2026-01-21 13:46:31.714306349 +0000 UTC m=+0.149111271 container init e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 13:46:31 compute-0 podman[97568]: 2026-01-21 13:46:31.726475443 +0000 UTC m=+0.161280335 container start e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 13:46:31 compute-0 podman[97568]: 2026-01-21 13:46:31.729945976 +0000 UTC m=+0.164750958 container attach e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:46:31 compute-0 serene_gagarin[97584]: 167 167
Jan 21 13:46:31 compute-0 systemd[1]: libpod-e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4.scope: Deactivated successfully.
Jan 21 13:46:31 compute-0 podman[97568]: 2026-01-21 13:46:31.734192848 +0000 UTC m=+0.168997740 container died e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-28336d4a267a254d44df21562f1273742de6dbd485b5b97e503e51dd10bb4021-merged.mount: Deactivated successfully.
Jan 21 13:46:31 compute-0 sudo[97619]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyurjoasbpcekwdqwoeddipcbjqnjrtr ; /usr/bin/python3'
Jan 21 13:46:31 compute-0 podman[97568]: 2026-01-21 13:46:31.772819991 +0000 UTC m=+0.207624873 container remove e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 13:46:31 compute-0 sudo[97619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:46:31 compute-0 systemd[1]: libpod-conmon-e998e028748ca4ed91f9b847e199ba4e5056807b9c9d07fa9d3f41d1d6a3ecf4.scope: Deactivated successfully.
Jan 21 13:46:31 compute-0 python3[97625]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:46:31 compute-0 podman[97633]: 2026-01-21 13:46:31.941348629 +0000 UTC m=+0.043625004 container create 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:31 compute-0 podman[97640]: 2026-01-21 13:46:31.973060315 +0000 UTC m=+0.050548981 container create cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 13:46:31 compute-0 systemd[1]: Started libpod-conmon-35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962.scope.
Jan 21 13:46:31 compute-0 systemd[1]: Started libpod-conmon-cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09.scope.
Jan 21 13:46:32 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5771479f22c59a212b39f764ad682b8efa116be2b81db5d21a99796675ded7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5771479f22c59a212b39f764ad682b8efa116be2b81db5d21a99796675ded7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5771479f22c59a212b39f764ad682b8efa116be2b81db5d21a99796675ded7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5771479f22c59a212b39f764ad682b8efa116be2b81db5d21a99796675ded7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:32 compute-0 podman[97633]: 2026-01-21 13:46:31.922698079 +0000 UTC m=+0.024974514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:32 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:32 compute-0 podman[97633]: 2026-01-21 13:46:32.021524724 +0000 UTC m=+0.123801119 container init 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 13:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006d66e62ea34b9883aacc3a981b68561b42d94f9cb961df6d72eebd811deb41/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006d66e62ea34b9883aacc3a981b68561b42d94f9cb961df6d72eebd811deb41/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:32 compute-0 podman[97633]: 2026-01-21 13:46:32.030153672 +0000 UTC m=+0.132430057 container start 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 13:46:32 compute-0 podman[97633]: 2026-01-21 13:46:32.034535159 +0000 UTC m=+0.136811574 container attach 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 13:46:32 compute-0 podman[97640]: 2026-01-21 13:46:32.038378971 +0000 UTC m=+0.115867647 container init cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:46:32 compute-0 podman[97640]: 2026-01-21 13:46:32.043815262 +0000 UTC m=+0.121303938 container start cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:46:32 compute-0 podman[97640]: 2026-01-21 13:46:31.948231185 +0000 UTC m=+0.025719891 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:46:32 compute-0 podman[97640]: 2026-01-21 13:46:32.04829866 +0000 UTC m=+0.125787336 container attach cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 13:46:32 compute-0 admiring_merkle[97665]: {
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:     "0": [
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:         {
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "devices": [
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "/dev/loop3"
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             ],
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_name": "ceph_lv0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_size": "21470642176",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "name": "ceph_lv0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "tags": {
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.crush_device_class": "",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.encrypted": "0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.osd_id": "0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.type": "block",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.vdo": "0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.with_tpm": "0"
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             },
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "type": "block",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "vg_name": "ceph_vg0"
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:         }
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:     ],
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:     "1": [
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:         {
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "devices": [
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "/dev/loop4"
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             ],
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_name": "ceph_lv1",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_size": "21470642176",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "name": "ceph_lv1",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "tags": {
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.crush_device_class": "",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.encrypted": "0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.osd_id": "1",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.type": "block",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.vdo": "0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.with_tpm": "0"
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             },
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "type": "block",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "vg_name": "ceph_vg1"
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:         }
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:     ],
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:     "2": [
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:         {
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "devices": [
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "/dev/loop5"
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             ],
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_name": "ceph_lv2",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_size": "21470642176",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "name": "ceph_lv2",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "tags": {
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.cluster_name": "ceph",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.crush_device_class": "",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.encrypted": "0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.objectstore": "bluestore",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.osd_id": "2",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.type": "block",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.vdo": "0",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:                 "ceph.with_tpm": "0"
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             },
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "type": "block",
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:             "vg_name": "ceph_vg2"
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:         }
Jan 21 13:46:32 compute-0 admiring_merkle[97665]:     ]
Jan 21 13:46:32 compute-0 admiring_merkle[97665]: }
Jan 21 13:46:32 compute-0 systemd[1]: libpod-35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962.scope: Deactivated successfully.
Jan 21 13:46:32 compute-0 podman[97633]: 2026-01-21 13:46:32.318450051 +0000 UTC m=+0.420726436 container died 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e5771479f22c59a212b39f764ad682b8efa116be2b81db5d21a99796675ded7-merged.mount: Deactivated successfully.
Jan 21 13:46:32 compute-0 podman[97633]: 2026-01-21 13:46:32.373316306 +0000 UTC m=+0.475592701 container remove 35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:32 compute-0 systemd[1]: libpod-conmon-35a29146015c110817c61cb0345ad537d689961792d01dc2690a1ae23edc1962.scope: Deactivated successfully.
Jan 21 13:46:32 compute-0 sudo[97533]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:32 compute-0 sudo[97709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:46:32 compute-0 sudo[97709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:32 compute-0 sudo[97709]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 21 13:46:32 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516634605' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 21 13:46:32 compute-0 eager_brattain[97668]: 
Jan 21 13:46:32 compute-0 sudo[97734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:46:32 compute-0 sudo[97734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:32 compute-0 eager_brattain[97668]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Jan 21 13:46:32 compute-0 systemd[1]: libpod-cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09.scope: Deactivated successfully.
Jan 21 13:46:32 compute-0 podman[97640]: 2026-01-21 13:46:32.569930062 +0000 UTC m=+0.647418748 container died cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 13:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-006d66e62ea34b9883aacc3a981b68561b42d94f9cb961df6d72eebd811deb41-merged.mount: Deactivated successfully.
Jan 21 13:46:32 compute-0 podman[97640]: 2026-01-21 13:46:32.620921342 +0000 UTC m=+0.698410068 container remove cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09 (image=quay.io/ceph/ceph:v20, name=eager_brattain, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 13:46:32 compute-0 systemd[1]: libpod-conmon-cd4c3b24dce2e57f81fa953eb54ff795e5ed7c0d7f13c65c582c8b793b2b6d09.scope: Deactivated successfully.
Jan 21 13:46:32 compute-0 sudo[97619]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:32 compute-0 ceph-mon[75031]: pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 11 KiB/s wr, 211 op/s
Jan 21 13:46:32 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2516634605' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 21 13:46:32 compute-0 podman[97784]: 2026-01-21 13:46:32.841082067 +0000 UTC m=+0.053896262 container create 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:32 compute-0 systemd[1]: Started libpod-conmon-858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317.scope.
Jan 21 13:46:32 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:32 compute-0 podman[97784]: 2026-01-21 13:46:32.823117664 +0000 UTC m=+0.035931879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:32 compute-0 podman[97784]: 2026-01-21 13:46:32.925078845 +0000 UTC m=+0.137893080 container init 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:32 compute-0 podman[97784]: 2026-01-21 13:46:32.936252175 +0000 UTC m=+0.149066400 container start 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:32 compute-0 podman[97784]: 2026-01-21 13:46:32.940582989 +0000 UTC m=+0.153397184 container attach 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 13:46:32 compute-0 gallant_sinoussi[97800]: 167 167
Jan 21 13:46:32 compute-0 systemd[1]: libpod-858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317.scope: Deactivated successfully.
Jan 21 13:46:32 compute-0 podman[97784]: 2026-01-21 13:46:32.944046702 +0000 UTC m=+0.156860917 container died 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d971799eeeffb1be6369a3510b5933dd3b329e35e99388a924c6119cc5aa7c6-merged.mount: Deactivated successfully.
Jan 21 13:46:32 compute-0 podman[97784]: 2026-01-21 13:46:32.995887164 +0000 UTC m=+0.208701359 container remove 858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_sinoussi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:46:33 compute-0 systemd[1]: libpod-conmon-858da3cc31fe5dc7db36e8e9c15ca60f7d0dfc9089fb4988a1052ec1d1d6a317.scope: Deactivated successfully.
Jan 21 13:46:33 compute-0 podman[97825]: 2026-01-21 13:46:33.19250016 +0000 UTC m=+0.048344368 container create a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:33 compute-0 systemd[1]: Started libpod-conmon-a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e.scope.
Jan 21 13:46:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:46:33 compute-0 podman[97825]: 2026-01-21 13:46:33.174356742 +0000 UTC m=+0.030200970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10f206e6744462faf2fbe57c7353b862d7a81c1b76926fb9772c9d9fa19ce67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10f206e6744462faf2fbe57c7353b862d7a81c1b76926fb9772c9d9fa19ce67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10f206e6744462faf2fbe57c7353b862d7a81c1b76926fb9772c9d9fa19ce67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b10f206e6744462faf2fbe57c7353b862d7a81c1b76926fb9772c9d9fa19ce67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:46:33 compute-0 podman[97825]: 2026-01-21 13:46:33.322281292 +0000 UTC m=+0.178125510 container init a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:46:33 compute-0 podman[97825]: 2026-01-21 13:46:33.330249705 +0000 UTC m=+0.186093923 container start a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 13:46:33 compute-0 podman[97825]: 2026-01-21 13:46:33.333844782 +0000 UTC m=+0.189689000 container attach a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:46:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 9.6 KiB/s wr, 181 op/s
Jan 21 13:46:34 compute-0 lvm[97918]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:46:34 compute-0 lvm[97918]: VG ceph_vg0 finished
Jan 21 13:46:34 compute-0 lvm[97920]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:46:34 compute-0 lvm[97920]: VG ceph_vg1 finished
Jan 21 13:46:34 compute-0 lvm[97922]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:46:34 compute-0 lvm[97922]: VG ceph_vg2 finished
Jan 21 13:46:34 compute-0 charming_kirch[97841]: {}
Jan 21 13:46:34 compute-0 systemd[1]: libpod-a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e.scope: Deactivated successfully.
Jan 21 13:46:34 compute-0 systemd[1]: libpod-a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e.scope: Consumed 1.364s CPU time.
Jan 21 13:46:34 compute-0 podman[97825]: 2026-01-21 13:46:34.11841501 +0000 UTC m=+0.974259258 container died a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 13:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b10f206e6744462faf2fbe57c7353b862d7a81c1b76926fb9772c9d9fa19ce67-merged.mount: Deactivated successfully.
Jan 21 13:46:34 compute-0 podman[97825]: 2026-01-21 13:46:34.424835687 +0000 UTC m=+1.280679905 container remove a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kirch, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:46:34 compute-0 sudo[97734]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:46:34 compute-0 systemd[1]: libpod-conmon-a9b43be17c8ea34e923c3d5ac2499e80027e9e71f9aae1d47779cc9ce9d60a9e.scope: Deactivated successfully.
Jan 21 13:46:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:46:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:34 compute-0 sudo[97939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:46:34 compute-0 sudo[97939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:46:34 compute-0 sudo[97939]: pam_unix(sudo:session): session closed for user root
Jan 21 13:46:34 compute-0 ceph-mon[75031]: pgmap v90: 11 pgs: 11 active+clean; 461 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 9.6 KiB/s wr, 181 op/s
Jan 21 13:46:34 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:34 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Jan 21 13:46:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:46:36 compute-0 ceph-mgr[75322]: [progress INFO root] Writing back 6 completed events
Jan 21 13:46:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 13:46:36 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:36 compute-0 ceph-mon[75031]: pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Jan 21 13:46:36 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v92: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 6.7 KiB/s wr, 144 op/s
Jan 21 13:46:38 compute-0 ceph-mon[75031]: pgmap v92: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 6.7 KiB/s wr, 144 op/s
Jan 21 13:46:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:46:39
Jan 21 13:46:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:46:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:46:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'images']
Jan 21 13:46:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:46:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v93: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 6.1 KiB/s wr, 131 op/s
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.041523176221679e-07 of space, bias 4.0, pg target 0.0009649827811466015 quantized to 16 (current 1)
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 21 13:46:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 21 13:46:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:46:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:46:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 21 13:46:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 21 13:46:41 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 21 13:46:41 compute-0 ceph-mon[75031]: pgmap v93: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 6.1 KiB/s wr, 131 op/s
Jan 21 13:46:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:41 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev 6ea2c028-57ff-4cd8-a4dc-dd541e357001 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 21 13:46:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 21 13:46:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:46:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v95: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 17 op/s
Jan 21 13:46:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 13:46:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 21 13:46:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 21 13:46:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:42 compute-0 ceph-mon[75031]: osdmap e43: 3 total, 3 up, 3 in
Jan 21 13:46:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:42 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 21 13:46:42 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev ea0f3e94-5f24-4874-b858-f72380263c3a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 21 13:46:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 21 13:46:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:42 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=44 pruub=15.468605995s) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active pruub 80.815086365s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:42 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=44 pruub=15.468605995s) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown pruub 80.815086365s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 21 13:46:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 21 13:46:43 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 21 13:46:43 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev 309baaee-8c82-40b7-82ca-97257dcf4e62 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 21 13:46:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 21 13:46:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:43 compute-0 ceph-mon[75031]: pgmap v95: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 17 op/s
Jan 21 13:46:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:43 compute-0 ceph-mon[75031]: osdmap e44: 3 total, 3 up, 3 in
Jan 21 13:46:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1e( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.c( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.e( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.10( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.12( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.14( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1a( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1e( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.0( empty local-lis/les=44/45 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.e( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.12( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.10( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.14( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.1a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v98: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 13:46:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 13:46:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 21 13:46:44 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:44 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:44 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 21 13:46:44 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 21 13:46:44 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev d29d1668-d126-47d3-b5d4-e19525facf01 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 21 13:46:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 21 13:46:44 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 21 13:46:44 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 46 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46 pruub=15.843464851s) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active pruub 87.345634460s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:44 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 46 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46 pruub=15.843464851s) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown pruub 87.345634460s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:44 compute-0 ceph-mon[75031]: osdmap e45: 3 total, 3 up, 3 in
Jan 21 13:46:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:44 compute-0 ceph-mon[75031]: osdmap e46: 3 total, 3 up, 3 in
Jan 21 13:46:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 21 13:46:44 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 46 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=8.559959412s) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active pruub 85.138343811s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:44 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 46 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=8.559959412s) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown pruub 85.138343811s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 21 13:46:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 21 13:46:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 21 13:46:45 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 21 13:46:45 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev 7841d6e4-20a9-4b84-aa54-4dcb82cb141c (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 21 13:46:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 21 13:46:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1e( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1c( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1d( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1b( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1f( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1a( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.7( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.19( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.6( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.18( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.3( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.5( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.8( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.a( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.b( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.4( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.2( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.9( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1f( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1e( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1c( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1d( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.e( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.f( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.d( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.c( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.10( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.7( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.b( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.12( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.11( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.13( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.15( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.16( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.17( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.8( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.6( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1b( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.a( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1b( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.14( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.5( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1a( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.9( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.4( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.3( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.19( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.2( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.c( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.d( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.f( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.e( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.10( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.11( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.12( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.13( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.15( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.14( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.16( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.17( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.18( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1f( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1e( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1c( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1d( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1a( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1c( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1d( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.7( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.6( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.19( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.1e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.3( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.a( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.18( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.4( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.b( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.2( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.8( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.5( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.9( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.0( empty local-lis/les=46/47 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.b( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.6( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.c( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.5( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.d( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.10( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.7( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.9( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.8( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.4( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.3( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.12( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.11( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.13( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.15( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.17( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.16( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 47 pg[3.14( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [1] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.1b( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.19( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.2( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.c( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.0( empty local-lis/les=46/47 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.d( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.f( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.e( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.13( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.10( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.12( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.11( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.16( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.15( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.17( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.14( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 47 pg[4.18( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:45 compute-0 ceph-mon[75031]: pgmap v98: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 21 13:46:45 compute-0 ceph-mon[75031]: osdmap e47: 3 total, 3 up, 3 in
Jan 21 13:46:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v101: 104 pgs: 2 peering, 62 unknown, 40 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 21 13:46:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 21 13:46:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 13:46:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:45 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 21 13:46:45 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 21 13:46:45 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 21 13:46:45 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 21 13:46:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 21 13:46:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 21 13:46:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 21 13:46:46 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 21 13:46:46 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev 3b62b1f7-921e-454b-bb9f-f74107de3873 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 21 13:46:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 21 13:46:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 21 13:46:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:46 compute-0 ceph-mon[75031]: 4.1f scrub starts
Jan 21 13:46:46 compute-0 ceph-mon[75031]: 4.1f scrub ok
Jan 21 13:46:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 21 13:46:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:46 compute-0 ceph-mon[75031]: osdmap e48: 3 total, 3 up, 3 in
Jan 21 13:46:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:46:46 compute-0 ceph-mgr[75322]: [progress WARNING root] Starting Global Recovery Event,110 pgs not in active + clean state
Jan 21 13:46:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 48 pg[6.0( v 38'39 (0'0,38'39] local-lis/les=25/26 n=22 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=48 pruub=9.888830185s) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 36'38 mlcod 36'38 active pruub 89.168746948s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 48 pg[6.0( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=48 pruub=9.888830185s) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 36'38 mlcod 0'0 unknown pruub 89.168746948s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 21 13:46:47 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.4( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.5( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.7( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.2( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.e( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.c( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 49 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=25/26 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:47 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev e0201a3f-5d88-4b29-a15d-92205f718d90 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 21 13:46:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 21 13:46:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:47 compute-0 ceph-mon[75031]: pgmap v101: 104 pgs: 2 peering, 62 unknown, 40 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:47 compute-0 ceph-mon[75031]: 2.1e scrub starts
Jan 21 13:46:47 compute-0 ceph-mon[75031]: 2.1e scrub ok
Jan 21 13:46:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v104: 150 pgs: 2 peering, 108 unknown, 40 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 13:46:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 13:46:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 48 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48 pruub=14.814599991s) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active pruub 85.865051270s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48 pruub=14.814599991s) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown pruub 85.865051270s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.7( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.8( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.9( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.12( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.13( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.14( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.15( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.3( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.2( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.a( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.b( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.16( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.17( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.18( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.19( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.c( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.d( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.e( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.5( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.6( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.4( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.f( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.10( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.11( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1a( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1b( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1c( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1d( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1e( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 49 pg[5.1f( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 21 13:46:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 21 13:46:48 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 21 13:46:48 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev c6da06d2-209d-4646-8967-ce2e4e0098de (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 21 13:46:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 21 13:46:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.0( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 36'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 50 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=25/25 les/c/f=26/26/0 sis=48) [0] r=0 lpr=48 pi=[25,48)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.10( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1f( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.17( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.8( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.a( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.b( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.0( empty local-lis/les=48/50 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.6( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.e( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.d( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1c( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1b( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [2] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:48 compute-0 ceph-mon[75031]: osdmap e49: 3 total, 3 up, 3 in
Jan 21 13:46:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:48 compute-0 ceph-mon[75031]: osdmap e50: 3 total, 3 up, 3 in
Jan 21 13:46:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:48 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 21 13:46:48 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 21 13:46:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 21 13:46:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 21 13:46:49 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 21 13:46:49 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev b85d4030-337d-448f-a812-f898b5ae1624 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 21 13:46:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 21 13:46:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:49 compute-0 ceph-mon[75031]: pgmap v104: 150 pgs: 2 peering, 108 unknown, 40 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:49 compute-0 ceph-mon[75031]: 2.1c scrub starts
Jan 21 13:46:49 compute-0 ceph-mon[75031]: 2.1c scrub ok
Jan 21 13:46:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:49 compute-0 ceph-mon[75031]: osdmap e51: 3 total, 3 up, 3 in
Jan 21 13:46:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 21 13:46:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v107: 212 pgs: 1 peering, 93 unknown, 118 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 13:46:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 13:46:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:49 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 21 13:46:49 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 21 13:46:49 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 21 13:46:49 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 50 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=50 pruub=8.076550484s) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active pruub 85.412124634s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 50 pg[8.0( v 35'6 (0'0,35'6] local-lis/les=34/35 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=50 pruub=11.691671371s) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 35'5 mlcod 35'5 active pruub 89.027275085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 50 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=50 pruub=8.076550484s) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown pruub 85.412124634s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 50 pg[8.0( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=50 pruub=11.691671371s) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 35'5 mlcod 0'0 unknown pruub 89.027275085s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.2( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.3( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.4( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.5( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.6( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.7( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.8( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.9( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.a( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.b( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.c( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.d( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.e( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.f( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.10( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.12( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.11( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.13( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.14( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.15( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.16( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.17( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.18( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.19( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1a( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1b( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1c( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1d( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1e( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[7.1f( empty local-lis/les=26/27 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1( v 35'6 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.2( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.3( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.4( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.5( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.6( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.7( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.8( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.9( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.a( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.b( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.d( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.e( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.c( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.10( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.11( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.12( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.13( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.15( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.14( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.16( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.17( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.18( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.19( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1a( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1b( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1c( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1d( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1e( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 51 pg[8.1f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:49 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 21 13:46:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 21 13:46:50 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:50 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:50 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 21 13:46:50 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev b51c9854-0c23-4ac9-ae5d-13a0adeaed63 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev 6ea2c028-57ff-4cd8-a4dc-dd541e357001 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event 6ea2c028-57ff-4cd8-a4dc-dd541e357001 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev ea0f3e94-5f24-4874-b858-f72380263c3a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event ea0f3e94-5f24-4874-b858-f72380263c3a (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev 309baaee-8c82-40b7-82ca-97257dcf4e62 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event 309baaee-8c82-40b7-82ca-97257dcf4e62 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev d29d1668-d126-47d3-b5d4-e19525facf01 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event d29d1668-d126-47d3-b5d4-e19525facf01 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev 7841d6e4-20a9-4b84-aa54-4dcb82cb141c (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event 7841d6e4-20a9-4b84-aa54-4dcb82cb141c (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev 3b62b1f7-921e-454b-bb9f-f74107de3873 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event 3b62b1f7-921e-454b-bb9f-f74107de3873 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 21 13:46:50 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 52 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=38/39 n=9 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=52 pruub=15.396927834s) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 41'17 mlcod 41'17 active pruub 88.580261230s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev e0201a3f-5d88-4b29-a15d-92205f718d90 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event e0201a3f-5d88-4b29-a15d-92205f718d90 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev c6da06d2-209d-4646-8967-ce2e4e0098de (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event c6da06d2-209d-4646-8967-ce2e4e0098de (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev b85d4030-337d-448f-a812-f898b5ae1624 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event b85d4030-337d-448f-a812-f898b5ae1624 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev b51c9854-0c23-4ac9-ae5d-13a0adeaed63 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 21 13:46:50 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event b51c9854-0c23-4ac9-ae5d-13a0adeaed63 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[9.0( v 42'483 (0'0,42'483] local-lis/les=36/37 n=210 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=52 pruub=13.358799934s) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 42'482 mlcod 42'482 active pruub 91.048835754s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 52 pg[10.0( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=52 pruub=15.396927834s) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 41'17 mlcod 0'0 unknown pruub 88.580261230s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[9.0( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=52 pruub=13.358799934s) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 42'482 mlcod 0'0 unknown pruub 91.048835754s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26600 space 0x562353353d40 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b8fe80 space 0x562353335740 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b23680 space 0x562353c68e40 0x0~98 clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5ca80 space 0x562353360e40 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b64100 space 0x562352de0240 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353be6000 space 0x562353361a40 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a700 space 0x56235404ce40 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b09700 space 0x562352da5740 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353bde600 space 0x562352d37140 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b0c380 space 0x562352da4540 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b09300 space 0x562352daa840 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a500 space 0x562352d00540 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b23c80 space 0x56235331d440 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b0c180 space 0x562352da4e40 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b09100 space 0x562352dab140 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b65f00 space 0x56235330b140 0x0~98 clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26a00 space 0x562353335d40 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b23100 space 0x562352da6840 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b08a80 space 0x562352d00e40 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a900 space 0x56235404d740 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26d80 space 0x562352cf2540 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b0c800 space 0x562352da8e40 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a300 space 0x562353451a40 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b09500 space 0x562352da9d40 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5dc80 space 0x56235330a840 0x0~98 clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0ab00 space 0x562352d01d40 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b72580 space 0x562352de1740 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b72b00 space 0x562352da7140 0x0~98 clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5da00 space 0x562353334840 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26680 space 0x562353c92540 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353bdff00 space 0x56235331cb40 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0ad00 space 0x562352cf5a40 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b72280 space 0x562353353140 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26b00 space 0x562353328540 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353bdeb00 space 0x562352d36840 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5c480 space 0x562353c68240 0x0~98 clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b0d380 space 0x562353411440 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26d00 space 0x562352dda240 0x0~98 clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5c680 space 0x562353360540 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353a83880 space 0x562353361440 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353bf5300 space 0x562352cf5140 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b0ca00 space 0x562352da8540 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b72100 space 0x562353345d40 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a080 space 0x562353451140 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b23580 space 0x562352de0b40 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5ca00 space 0x562353352b40 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5db80 space 0x562352dbcb40 0x0~98 clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353be6c80 space 0x562353450840 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b5d300 space 0x562353352540 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b08c80 space 0x562352dd6240 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0a680 space 0x562353344e40 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26f80 space 0x562352da9740 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b09180 space 0x56235333f140 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b9f080 space 0x562353410b40 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b08f00 space 0x562353345740 0x0~9a clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0af00 space 0x562352dd7d40 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353bf5f80 space 0x562352cf4840 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353c0ab80 space 0x562352dbd440 0x0~98 clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x562354082d80) split_cache   moving buffer(0x562353b26880 space 0x562352daba40 0x0~6e clean)
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.16( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.17( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1e( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.19( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1d( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.13( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.7( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.a( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.8( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.3( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.0( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 35'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.0( empty local-lis/les=50/52 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.d( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.7( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.5( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.b( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.14( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.16( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.19( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.10( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.17( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.12( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1e( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=26/26 les/c/f=27/27/0 sis=50) [1] r=0 lpr=50 pi=[26,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:50 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:50 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:50 compute-0 ceph-mon[75031]: 2.1d scrub starts
Jan 21 13:46:50 compute-0 ceph-mon[75031]: 2.1d scrub ok
Jan 21 13:46:50 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 21 13:46:50 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:50 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:50 compute-0 ceph-mon[75031]: osdmap e52: 3 total, 3 up, 3 in
Jan 21 13:46:50 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 21 13:46:50 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 21 13:46:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 21 13:46:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 21 13:46:51 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.11( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.12( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.10( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.19( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.7( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.18( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.6( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.5( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.4( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.3( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.8( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.9( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.2( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.13( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.16( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.14( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.17( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.15( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.12( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1d( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.14( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.17( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.16( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.11( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.15( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1c( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.10( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.13( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.12( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.d( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.c( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.f( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.b( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.2( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.9( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.e( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.a( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.8( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.3( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.6( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.4( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1a( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.5( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1b( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.18( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1e( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.19( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1f( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1c( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1d( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.5( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.18( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.3( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 41'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.9( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.d( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.c( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.14( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 53 pg[10.15( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [2] r=0 lpr=52 pi=[38,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.7( v 42'483 lc 0'0 (0'0,42'483] local-lis/les=36/37 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.14( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 systemd[76413]: Starting Mark boot as successful...
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.10( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.12( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.2( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.0( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 42'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.a( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1a( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.4( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.5( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.18( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.1c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 53 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=42'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:51 compute-0 systemd[76413]: Finished Mark boot as successful.
Jan 21 13:46:51 compute-0 ceph-mon[75031]: pgmap v107: 212 pgs: 1 peering, 93 unknown, 118 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:51 compute-0 ceph-mon[75031]: 3.1b scrub starts
Jan 21 13:46:51 compute-0 ceph-mon[75031]: 3.1b scrub ok
Jan 21 13:46:51 compute-0 ceph-mon[75031]: 4.1e scrub starts
Jan 21 13:46:51 compute-0 ceph-mon[75031]: 4.1e scrub ok
Jan 21 13:46:51 compute-0 ceph-mon[75031]: osdmap e53: 3 total, 3 up, 3 in
Jan 21 13:46:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:46:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v110: 274 pgs: 124 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 21 13:46:51 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:51 compute-0 ceph-mgr[75322]: [progress INFO root] Writing back 16 completed events
Jan 21 13:46:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 13:46:51 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:51 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 21 13:46:51 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 21 13:46:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 21 13:46:52 compute-0 ceph-mon[75031]: 4.1d scrub starts
Jan 21 13:46:52 compute-0 ceph-mon[75031]: 4.1d scrub ok
Jan 21 13:46:52 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 21 13:46:52 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:46:52 compute-0 ceph-mon[75031]: 2.a scrub starts
Jan 21 13:46:52 compute-0 ceph-mon[75031]: 2.a scrub ok
Jan 21 13:46:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 21 13:46:52 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 21 13:46:52 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 54 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54 pruub=15.032677650s) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active pruub 95.111892700s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:52 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 54 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54 pruub=15.032677650s) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown pruub 95.111892700s@ mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 155 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:53 compute-0 ceph-mon[75031]: pgmap v110: 274 pgs: 124 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 21 13:46:53 compute-0 ceph-mon[75031]: osdmap e54: 3 total, 3 up, 3 in
Jan 21 13:46:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 21 13:46:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 21 13:46:53 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.16( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.17( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.15( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.14( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.13( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.12( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.11( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.10( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.e( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.f( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.d( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.b( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.2( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.3( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.c( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.8( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.9( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.4( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.a( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.5( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.6( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.7( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.18( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1a( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.19( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1b( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1c( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1f( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1d( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1e( empty local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.14( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.15( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.16( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.12( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.13( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.11( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.17( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.10( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.d( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.0( empty local-lis/les=54/55 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.3( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.c( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.2( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.8( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.4( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.9( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.5( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.7( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.6( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1a( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.a( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.18( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.19( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1d( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 55 pg[11.1c( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:53 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 21 13:46:53 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 21 13:46:53 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 21 13:46:53 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 21 13:46:53 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 21 13:46:54 compute-0 ceph-mon[75031]: pgmap v112: 305 pgs: 155 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:54 compute-0 ceph-mon[75031]: osdmap e55: 3 total, 3 up, 3 in
Jan 21 13:46:54 compute-0 ceph-mon[75031]: 3.1f scrub starts
Jan 21 13:46:54 compute-0 ceph-mon[75031]: 3.1f scrub ok
Jan 21 13:46:54 compute-0 ceph-mon[75031]: 4.1c scrub starts
Jan 21 13:46:54 compute-0 ceph-mon[75031]: 4.1c scrub ok
Jan 21 13:46:54 compute-0 ceph-mon[75031]: 2.6 scrub starts
Jan 21 13:46:54 compute-0 ceph-mon[75031]: 2.6 scrub ok
Jan 21 13:46:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 21 13:46:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 21 13:46:55 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.12( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.574015617s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.188224792s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1d( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561406136s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.175674438s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.12( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573951721s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.188224792s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1d( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561349869s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.175674438s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573716164s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.188201904s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1e( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565937996s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180458069s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573684692s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.188201904s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1e( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565897942s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180458069s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.19( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389556885s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004173279s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.19( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389475822s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004173279s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573497772s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.188247681s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573477745s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.188247681s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.18( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389277458s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004257202s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.17( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389109612s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004112244s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.17( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389085770s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004112244s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573266983s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.188316345s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.18( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.389241219s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004257202s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.573219299s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.188316345s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.16( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388842583s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004112244s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.16( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388818741s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004112244s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.11( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565226555s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180580139s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.15( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388652802s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004013062s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.12( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565180779s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180610657s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.11( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565173149s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180580139s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.15( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388619423s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004013062s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.12( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565144539s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180610657s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.13( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565045357s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180618286s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.13( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.565014839s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180618286s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.13( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388439178s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004112244s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.14( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564970970s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180664062s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.13( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.388404846s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004112244s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.14( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564939499s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180664062s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578927994s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.194740295s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578907013s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.194740295s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.15( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564669609s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180671692s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.15( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564632416s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180671692s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578118324s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.194190979s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.11( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.387817383s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003997803s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578083992s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.194190979s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.11( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.387792587s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003997803s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.16( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564449310s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180702209s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.16( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564416885s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180702209s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.576922417s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.193344116s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.576900482s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.193344116s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.387430191s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003913879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.387395859s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003913879s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.9( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564129829s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180755615s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.9( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.564108849s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180755615s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386993408s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003784180s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386961937s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003784180s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577178955s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.194015503s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386891365s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003829956s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577104568s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.194015503s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386854172s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003829956s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.c( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563767433s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180786133s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578066826s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.195190430s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.578038216s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.195190430s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.7( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563652039s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180862427s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577312469s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.194526672s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.c( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563675880s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180786133s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.7( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563619614s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180862427s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577281952s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.194526672s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.7( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386214256s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003608704s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.7( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386193275s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003608704s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.8( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386737823s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.004241943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.f( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563352585s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180862427s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.8( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.386703491s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.004241943s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.9( v 53'19 (0'0,53'19] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577460289s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.195045471s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.f( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.563322067s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180862427s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.9( v 53'19 (0'0,53'19] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577426910s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.195045471s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575845718s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.193817139s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575806618s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.193817139s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.2( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.385487556s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003616333s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.5( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.562740326s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180923462s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.5( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.562505722s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180923462s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.3( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384768486s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003486633s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577229500s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.195953369s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.3( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384731293s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003486633s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.577185631s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.195953369s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.4( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.562129021s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.180946350s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.4( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561977386s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.180946350s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.2( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.385453224s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003616333s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.3( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561805725s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181121826s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.4( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384226799s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003570557s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.4( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384200096s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003570557s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.d( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.576145172s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.195617676s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.5( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384104729s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003601074s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.2( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561473846s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181007385s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.d( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.576097488s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.195617676s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.5( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.384060860s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003601074s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.e( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575691223s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.195297241s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.2( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561427116s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181007385s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.e( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575486183s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.195297241s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.6( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383355141s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003334045s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.6( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383328438s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003334045s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561014175s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181030273s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575405121s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.195426941s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575377464s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.195426941s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.3( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.561057091s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181121826s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.560978889s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181030273s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.9( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383260727s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003448486s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.9( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383241653s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003448486s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575750351s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.196014404s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383036613s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003326416s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575729370s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.196014404s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.383007050s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003326416s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575522423s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.195991516s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382740021s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003219604s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382728577s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003250122s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382686615s) [1] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003219604s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.15( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575555801s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.196105957s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.14( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575508118s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 active pruub 90.196075439s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.15( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575522423s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.196105957s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.14( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575474739s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 90.196075439s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1a( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.560396194s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181091309s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.1a( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.560376167s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181091309s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575473785s) [1] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.195991516s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382456779s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003234863s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382434845s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003234863s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575113297s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.195976257s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.575053215s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.195976257s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.19( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.560072899s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181106567s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382429123s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 active pruub 90.003524780s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.19( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.560041428s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181106567s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.574926376s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 active pruub 90.196052551s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.382410049s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003524780s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.574903488s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 90.196052551s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.18( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.559853554s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 active pruub 87.181106567s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[5.18( empty local-lis/les=48/50 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.559838295s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=0'0 unknown NOTIFY pruub 87.181106567s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[2.1c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56 pruub=11.381968498s) [0] r=-1 lpr=56 pi=[44,56)/1 crt=0'0 unknown NOTIFY pruub 90.003250122s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.11( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.17( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.13( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.15( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.12( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.1a( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.19( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.1e( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.18( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.19( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.16( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.9( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.6( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.d( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.9( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.f( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.3( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.8( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.b( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.5( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.2( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.7( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.a( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.15( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.1d( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.4( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.c( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.4( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.9( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.1c( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.4( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.f( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.f( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.7( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.6( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.7( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.5( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.1( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.2( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.11( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.10( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.1f( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.17( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[2.1b( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.13( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.12( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.2( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.d( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.1d( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.3( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.e( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.b( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.8( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.1a( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[10.14( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.18( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.1( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[5.19( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.368993759s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515510559s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.559500694s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706054688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544999123s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.691581726s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.559462547s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706054688s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544964790s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.691581726s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.16( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.368938446s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515510559s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.17( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.959045410s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106040955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.17( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.958992958s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106040955s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544333458s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.691574097s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544312477s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.691574097s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.368309975s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515602112s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.368269920s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515602112s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544124603s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.691528320s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.544080734s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.691528320s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.561562538s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709175110s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.1e( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1d( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367938995s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515579224s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.561527252s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709175110s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.15( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954565048s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.102287292s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1d( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367905617s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515579224s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.15( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.15( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954534531s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.102287292s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[5.14( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.556081772s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.703994751s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.14( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954298973s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.102249146s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.13( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.556056023s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.703994751s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.14( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954261780s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.102249146s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[2.11( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.557409286s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705558777s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[10.16( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.557368279s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705558777s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.18( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.370068550s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292808533s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.18( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.370034218s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292808533s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.13( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369175911s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292221069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.13( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369147301s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292221069s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.14( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369690895s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292770386s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.14( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369644165s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292770386s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.12( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369027138s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292236328s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1b( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.361879349s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.511253357s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1b( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.361842155s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.511253357s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.556145668s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.705642700s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.556205750s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705650330s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.1a( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.556103706s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.705642700s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.556076050s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705650330s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.12( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.956342697s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.105949402s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.1e( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.12( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.956317902s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.105949402s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.559554100s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709350586s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.555890083s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.705718994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.555866241s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.705718994s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.11( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.956044197s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.105979919s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.11( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.956008911s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.105979919s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.12( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.369009972s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292236328s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.15( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.11( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367939949s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292259216s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.10( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367845535s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292190552s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.10( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367817879s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292190552s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.11( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367897987s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292259216s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.1d( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.e( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367544174s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.292160034s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.e( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.367481232s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.292160034s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.d( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366911888s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291801453s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.d( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366884232s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291801453s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.525283813s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.450225830s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.525240898s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.450225830s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.f( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366933823s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291847229s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.2( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366449356s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291748047s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.f( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366568565s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291847229s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.2( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.366422653s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291748047s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.524724960s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.450088501s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.365671158s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291137695s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.15( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.524532318s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.450088501s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.555399895s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.705749512s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.555373192s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.705749512s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.559520721s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709350586s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.558624268s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709266663s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.558599472s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709266663s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.18( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.365021706s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515914917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.18( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364998817s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515914917s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.554696083s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705848694s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.554672241s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705848694s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.365622520s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291137695s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954703331s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106155396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.525206566s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.450164795s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954679489s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106155396s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.554334641s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705886841s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.7( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363949776s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515586853s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.524435043s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.450164795s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553997040s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705886841s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.7( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363672256s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515586853s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.4( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.365040779s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290977478s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.557277679s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709304810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.524024010s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.449981689s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.557236671s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709304810s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523996353s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.449981689s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.953858376s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106048584s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.4( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.365002632s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290977478s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.953831673s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106048584s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553622246s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705924988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.9( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364850998s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290985107s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553588867s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705924988s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.6( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363206863s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515594482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523867607s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.450050354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.6( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363186836s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515594482s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.9( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364809990s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290985107s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523848534s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.450050354s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553456306s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706092834s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364743233s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291030884s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553420067s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706092834s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364717484s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291030884s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.d( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.953078270s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106094360s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523504257s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.449890137s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552910805s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.705947876s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.d( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.953042030s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106094360s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523486137s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.449890137s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552885056s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.705947876s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364090919s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290573120s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552580833s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.705924988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552537918s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.705924988s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.a( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364062309s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290573120s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.5( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363142014s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.516761780s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.5( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363111496s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.516761780s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523032188s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.449851990s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.523010254s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.449851990s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.7( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363712311s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290596008s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.7( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363683701s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290596008s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.522802353s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 active pruub 96.449867249s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.5( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363575935s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290534973s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56 pruub=8.522778511s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 96.449867249s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.8( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363784790s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.290969849s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.8( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.363761902s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290969849s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1b( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364582062s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.291725159s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1b( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.364463806s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.291725159s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1c( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.359720230s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 101.287040710s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.1c( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.359689713s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.287040710s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.14( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[4.5( empty local-lis/les=46/47 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.362443924s) [1] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 101.290534973s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.1f( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.18( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.13( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.11( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.e( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552035332s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.705978394s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.1( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.551997185s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.705978394s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.1a( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.a( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.555334091s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709403992s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.555305481s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709403992s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.12( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.951875687s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106101990s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.951852798s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106101990s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.3( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.361502647s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515792847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.3( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.361481667s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515792847s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.555057526s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709495544s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.9( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.954181671s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108642578s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.555035591s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709495544s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.10( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.951672554s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106094360s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.361167908s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515792847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.551523209s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706146240s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.10( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.950963974s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106094360s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.9( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.953539848s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108642578s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.554125786s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709457397s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.554110527s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709457397s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.11( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.550776482s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706192017s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.550748825s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706192017s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.8( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.360742569s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.516227722s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.8( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.360729218s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.516227722s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.1b( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.17( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.2( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949995995s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106147766s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.2( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949970245s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106147766s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.549811363s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706161499s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.549795151s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706161499s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.1( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.359342575s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515792847s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.11( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.3( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949330330s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106147766s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.a( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.359076500s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.515907288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.3( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949302673s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106147766s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.549243927s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706199646s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.549221992s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706199646s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.552453041s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709503174s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.552430153s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709503174s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.549049377s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706245422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.a( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.359049797s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.515907288s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.12( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548954010s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706230164s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548913956s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706146240s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548999786s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706245422s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548927307s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706230164s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.8( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948949814s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106262207s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.14( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548859596s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706268311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.8( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948909760s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106262207s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548843384s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706268311s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548896790s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706405640s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548877716s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706405640s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548740387s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706306458s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.548727989s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706306458s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.18( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.18( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.1b( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.1c( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.947653770s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106277466s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.10( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.947635651s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106277466s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547548294s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706329346s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547554970s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706336975s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.4( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.947497368s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.106307983s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547513962s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706329346s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547524452s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706336975s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.550722122s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709564209s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.4( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.947466850s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.106307983s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.550680161s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709564209s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.c( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357912064s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.516860962s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.c( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357893944s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.516860962s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547277451s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706367493s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.551300049s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.710418701s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.551282883s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.710418701s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.7( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547235489s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706367493s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.2( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547184944s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706382751s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.6( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949555397s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108810425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.6( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949542046s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108810425s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547147751s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706382751s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.1f( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357487679s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.516868591s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.e( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357476234s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.516868591s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547074318s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.706504822s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.547037125s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.706504822s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.358347893s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.517913818s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.f( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.358334541s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.517913818s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.550650597s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 42'483 active pruub 94.710289001s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.546828270s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.706474304s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.18( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949105263s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108840942s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.550603867s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 42'483 unknown NOTIFY pruub 94.710289001s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.18( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.949076653s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108840942s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553970337s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.713775635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=50/52 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.546689987s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.706474304s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553945541s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.713775635s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.19( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948929787s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108871460s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.554060936s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.714012146s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.554046631s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.714012146s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.19( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948907852s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108871460s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.11( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357830048s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.517883301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553935051s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.714035034s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.549504280s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.709617615s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.11( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357792854s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.517883301s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.549485207s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.709617615s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.9( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356709480s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.516860962s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.9( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356665611s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.516860962s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553778648s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.714035034s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.12( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357501030s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.517868042s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.12( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.357481003s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.517868042s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1a( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948439598s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108833313s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1a( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948400497s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108833313s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948363304s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108863831s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1b( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948298454s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108863831s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.d( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1c( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.948136330s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108894348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552777290s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.714118958s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552760124s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.714118958s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1c( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.947521210s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108894348s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552537918s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.714134216s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552523613s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.714134216s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548697472s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.710365295s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553496361s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.715225220s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548639297s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.710365295s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553483963s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.715225220s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548583031s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.710357666s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.15( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356245995s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.518089294s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548546791s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.710357666s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.15( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356234550s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.518089294s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.946858406s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108917236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.16( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356202126s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.518264771s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1e( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.946837425s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108917236s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553113937s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.715209961s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.553086281s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.715209961s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.16( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.356178284s) [2] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.518264771s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.946687698s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 active pruub 97.108917236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[11.1f( empty local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=13.946675301s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 unknown NOTIFY pruub 97.108917236s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548074722s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 active pruub 94.710395813s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=11.548045158s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 94.710395813s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.17( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.355474472s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 active pruub 96.518211365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[3.17( empty local-lis/les=46/47 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=13.355379105s) [0] r=-1 lpr=56 pi=[46,56)/1 crt=0'0 unknown NOTIFY pruub 96.518211365s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552412033s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 active pruub 93.715270996s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=50/52 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.552382469s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=0'0 unknown NOTIFY pruub 93.715270996s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.551445961s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 active pruub 93.715240479s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=50/52 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56 pruub=10.551402092s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 93.715240479s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.12( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.6( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.10( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.1( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.3( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.14( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.e( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.5( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.9( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.c( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.8( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.3( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.2( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.e( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.3( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.9( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.10( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.5( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.8( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.2( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.1( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.3( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.8( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.2( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.f( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.1( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.e( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.a( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.1( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.18( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.9( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.1b( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.7( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.4( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.a( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.15( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.11( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.4( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.9( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.4( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.1a( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.1b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.1c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.b( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.7( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[7.11( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.6( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.1e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.9( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[3.16( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.1( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[6.5( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[11.1f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[8.1c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.8( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.3( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.c( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.9( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.6( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.6( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.4( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.f( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.1b( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 56 pg[4.1c( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.5( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.9( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.1a( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.12( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.18( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.1f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.15( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[8.1d( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[3.17( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[7.13( empty local-lis/les=0/0 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 56 pg[11.19( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 21 13:46:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 56 pg[4.5( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:55 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 21 13:46:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:46:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 21 13:46:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 21 13:46:56 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.1e( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.1a( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.1b( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.18( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.1a( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.12( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.1d( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.15( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.c( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.7( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.3( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.5( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.d( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.b( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.8( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=56/57 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.8( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.e( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.2( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.5( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.1( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.1( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.9( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.2( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.e( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.8( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.a( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.a( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=56/57 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.11( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.15( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.18( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.1b( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.e( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.11( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.1a( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.1f( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.1c( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.13( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.16( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.1e( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[11.11( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.11( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[3.18( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[7.1c( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [2] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 57 pg[4.1c( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [2] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[52,57)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.11( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.14( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.15( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.13( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.8( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.3( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.b( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.e( v 53'19 lc 39'4 (0'0,53'19] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.16( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.2( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.1f( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.2( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.d( v 53'19 lc 39'5 (0'0,53'19] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.5( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.1c( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.4( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.f( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.1d( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.1e( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.18( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.15( v 53'19 lc 39'3 (0'0,53'19] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.9( v 53'19 lc 39'8 (0'0,53'19] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[2.19( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [0] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.1b( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.1f( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.f( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.c( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[5.7( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.3( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.1( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.17( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.6( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.a( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.9( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.13( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.15( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.15( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.17( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.12( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.16( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.12( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.9( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=56/57 n=0 ec=50/26 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[3.1f( empty local-lis/les=56/57 n=0 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.8( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=56/57 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.f( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.d( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=56/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.5( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.a( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.3( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.c( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.9( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.4( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.7( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.1( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.6( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.11( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[2.1b( empty local-lis/les=56/57 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.1d( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.1a( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.12( v 53'19 lc 41'17 (0'0,53'19] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.18( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.14( v 53'19 lc 39'7 (0'0,53'19] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[5.19( empty local-lis/les=56/57 n=0 ec=48/23 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=56/57 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 57 pg[8.6( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=56/57 n=1 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=35'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=56/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=56) [1] r=0 lpr=56 pi=[52,56)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.2( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.4( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.d( v 38'39 lc 36'13 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.f( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.f( v 38'39 lc 36'1 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.d( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.7( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.5( v 38'39 lc 36'11 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.5( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.7( v 38'39 lc 36'21 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.9( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.14( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.12( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 57 pg[4.10( empty local-lis/les=56/57 n=0 ec=46/21 lis/c=46/46 les/c/f=47/47/0 sis=56) [1] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:56 compute-0 ceph-mon[75031]: pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 13:46:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 21 13:46:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:46:56 compute-0 ceph-mon[75031]: osdmap e56: 3 total, 3 up, 3 in
Jan 21 13:46:56 compute-0 ceph-mon[75031]: 4.16 scrub starts
Jan 21 13:46:56 compute-0 ceph-mon[75031]: 4.16 scrub ok
Jan 21 13:46:56 compute-0 ceph-mon[75031]: osdmap e57: 3 total, 3 up, 3 in
Jan 21 13:46:56 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event 9a8c3382-30e7-4c11-b909-be7da62b8856 (Global Recovery Event) in 10 seconds
Jan 21 13:46:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 21 13:46:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v117: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 21 13:46:57 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 21 13:46:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 21 13:46:57 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 21 13:46:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 21 13:46:57 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=53'484 lcod 42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 58 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[52,57)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:57 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 21 13:46:57 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 21 13:46:57 compute-0 ceph-mon[75031]: osdmap e58: 3 total, 3 up, 3 in
Jan 21 13:46:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 21 13:46:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 13:46:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 13:46:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 21 13:46:58 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.998245239s) [0] async=[0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 active pruub 101.008323669s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.998225212s) [0] async=[0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 active pruub 101.008255005s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.998094559s) [0] async=[0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 active pruub 101.008239746s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.998041153s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008239746s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.998172760s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008323669s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.997398376s) [0] async=[0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 active pruub 101.008316040s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.997216225s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008316040s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59 pruub=14.997075081s) [0] r=-1 lpr=59 pi=[52,59)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008255005s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.660623550s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 active pruub 104.448310852s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.660583496s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 104.448310852s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.662180901s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 active pruub 104.450340271s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.662142754s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 104.450340271s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.662052155s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 active pruub 104.450317383s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.662023544s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 104.450317383s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.661510468s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 active pruub 104.450065613s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:58 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 59 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59 pruub=13.661439896s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 104.450065613s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[6.a( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[6.2( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[6.e( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:58 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 59 pg[6.6( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:58 compute-0 ceph-mon[75031]: pgmap v117: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:46:58 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 13:46:58 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 21 13:46:58 compute-0 ceph-mon[75031]: osdmap e59: 3 total, 3 up, 3 in
Jan 21 13:46:58 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 21 13:46:58 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 21 13:46:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 524 B/s, 6 objects/s recovering
Jan 21 13:46:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 21 13:46:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 21 13:46:59 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.700810432s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.010345459s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.700797081s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.010353088s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698727608s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008323669s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.700737000s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.010345459s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.700737953s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.010353088s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698695183s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008323669s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699062347s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008819580s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699033737s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008819580s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698680878s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008621216s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698640823s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008621216s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698682785s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008827209s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698513031s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008689880s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698637009s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008827209s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698471069s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008689880s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.5( v 58'486 (0'0,58'486] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698368073s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=53'484 lcod 58'485 active pruub 101.008743286s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699906349s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.010353088s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.5( v 58'486 (0'0,58'486] local-lis/les=57/58 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698307037s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=53'484 lcod 58'485 unknown NOTIFY pruub 101.008743286s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699880600s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.010353088s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699876785s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.010360718s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.699830055s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.010360718s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698328972s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008880615s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698302269s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008880615s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698155403s) [0] async=[0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 active pruub 101.008804321s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=57/58 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60 pruub=13.698122025s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 101.008804321s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.5( v 58'486 (0'0,58'486] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=53'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.5( v 58'486 (0'0,58'486] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=53'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=59/60 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:59 compute-0 ceph-mon[75031]: 2.1a scrub starts
Jan 21 13:46:59 compute-0 ceph-mon[75031]: 2.1a scrub ok
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[6.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=59/60 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[6.e( v 38'39 lc 36'19 (0'0,38'39] local-lis/les=59/60 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:59 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 60 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.9( v 42'483 (0'0,42'483] local-lis/les=59/60 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:46:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 60 pg[9.11( v 42'483 (0'0,42'483] local-lis/les=59/60 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=59) [0] r=0 lpr=59 pi=[52,59)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 21 13:47:00 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 21 13:47:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 21 13:47:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 21 13:47:00 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.3( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.1( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.13( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.b( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.5( v 58'486 (0'0,58'486] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=58'486 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.d( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.19( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.1d( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 61 pg[9.1b( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=57/52 les/c/f=58/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:00 compute-0 ceph-mon[75031]: pgmap v120: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 524 B/s, 6 objects/s recovering
Jan 21 13:47:00 compute-0 ceph-mon[75031]: osdmap e60: 3 total, 3 up, 3 in
Jan 21 13:47:00 compute-0 ceph-mon[75031]: 4.17 scrub starts
Jan 21 13:47:00 compute-0 ceph-mon[75031]: 4.17 scrub ok
Jan 21 13:47:00 compute-0 ceph-mon[75031]: osdmap e61: 3 total, 3 up, 3 in
Jan 21 13:47:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 12 active+remapped, 8 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s, 2 keys/s, 30 objects/s recovering
Jan 21 13:47:01 compute-0 ceph-mgr[75322]: [progress INFO root] Writing back 17 completed events
Jan 21 13:47:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 13:47:01 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:47:02 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 21 13:47:02 compute-0 ceph-mon[75031]: pgmap v123: 305 pgs: 12 active+remapped, 8 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s, 2 keys/s, 30 objects/s recovering
Jan 21 13:47:02 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:47:02 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 21 13:47:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 12 active+remapped, 8 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s, 1 keys/s, 20 objects/s recovering
Jan 21 13:47:03 compute-0 ceph-mon[75031]: 4.15 scrub starts
Jan 21 13:47:03 compute-0 ceph-mon[75031]: 4.15 scrub ok
Jan 21 13:47:04 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 21 13:47:04 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 21 13:47:04 compute-0 ceph-mon[75031]: pgmap v124: 305 pgs: 12 active+remapped, 8 peering, 285 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s, 1 keys/s, 20 objects/s recovering
Jan 21 13:47:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v125: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 893 B/s, 1 keys/s, 17 objects/s recovering
Jan 21 13:47:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 21 13:47:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 21 13:47:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 21 13:47:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 21 13:47:05 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 21 13:47:05 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 21 13:47:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 21 13:47:05 compute-0 ceph-mon[75031]: 7.19 scrub starts
Jan 21 13:47:05 compute-0 ceph-mon[75031]: 7.19 scrub ok
Jan 21 13:47:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 21 13:47:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 21 13:47:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 13:47:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 13:47:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 21 13:47:05 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 21 13:47:06 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389625549s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 active pruub 108.017753601s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:06 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389667511s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 active pruub 108.017982483s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:06 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389375687s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 active pruub 108.018028259s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:06 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389333725s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 unknown NOTIFY pruub 108.018028259s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:06 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389297485s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 unknown NOTIFY pruub 108.017982483s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:06 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389019966s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 active pruub 108.018058777s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:06 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.388999939s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 unknown NOTIFY pruub 108.018058777s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:06 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 62 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.389514923s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=38'39 unknown NOTIFY pruub 108.017753601s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:06 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 62 pg[6.7( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:06 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:06 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 62 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:06 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 62 pg[6.3( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:06 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 21 13:47:06 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 21 13:47:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 21 13:47:06 compute-0 ceph-mon[75031]: pgmap v125: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 893 B/s, 1 keys/s, 17 objects/s recovering
Jan 21 13:47:06 compute-0 ceph-mon[75031]: 5.1f scrub starts
Jan 21 13:47:06 compute-0 ceph-mon[75031]: 5.1f scrub ok
Jan 21 13:47:06 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 13:47:06 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 21 13:47:06 compute-0 ceph-mon[75031]: osdmap e62: 3 total, 3 up, 3 in
Jan 21 13:47:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 21 13:47:06 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 21 13:47:06 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 63 pg[6.7( v 38'39 lc 36'21 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:06 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 63 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:06 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 63 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=62/63 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=38'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:06 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 63 pg[6.f( v 38'39 lc 36'1 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=62) [0] r=0 lpr=62 pi=[56,62)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v128: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 13:47:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 21 13:47:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 21 13:47:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 21 13:47:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 21 13:47:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 21 13:47:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 13:47:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 13:47:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 21 13:47:07 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 21 13:47:07 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 64 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64 pruub=12.313432693s) [1] r=-1 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 active pruub 112.450233459s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:07 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 64 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=48/50 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64 pruub=12.313391685s) [1] r=-1 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 112.450233459s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:07 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 64 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64 pruub=12.313249588s) [1] r=-1 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 active pruub 112.450508118s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:07 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 64 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64 pruub=12.313024521s) [1] r=-1 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 112.450508118s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:07 compute-0 ceph-mon[75031]: 8.16 scrub starts
Jan 21 13:47:07 compute-0 ceph-mon[75031]: 8.16 scrub ok
Jan 21 13:47:07 compute-0 ceph-mon[75031]: osdmap e63: 3 total, 3 up, 3 in
Jan 21 13:47:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 21 13:47:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 21 13:47:07 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 64 pg[6.c( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64) [1] r=0 lpr=64 pi=[48,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:07 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 64 pg[6.4( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64) [1] r=0 lpr=64 pi=[48,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 21 13:47:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 21 13:47:08 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 21 13:47:08 compute-0 ceph-mon[75031]: pgmap v128: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 13:47:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 13:47:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 21 13:47:08 compute-0 ceph-mon[75031]: osdmap e64: 3 total, 3 up, 3 in
Jan 21 13:47:08 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 65 pg[6.4( v 38'39 lc 36'15 (0'0,38'39] local-lis/les=64/65 n=2 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64) [1] r=0 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:08 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 65 pg[6.c( v 38'39 lc 36'17 (0'0,38'39] local-lis/les=64/65 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=64) [1] r=0 lpr=64 pi=[48,64)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 161 B/s, 2 keys/s, 1 objects/s recovering
Jan 21 13:47:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 21 13:47:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 21 13:47:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 21 13:47:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 21 13:47:09 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 21 13:47:09 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 21 13:47:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 21 13:47:09 compute-0 ceph-mon[75031]: osdmap e65: 3 total, 3 up, 3 in
Jan 21 13:47:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 21 13:47:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 21 13:47:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 13:47:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 13:47:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 21 13:47:09 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 21 13:47:10 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 21 13:47:10 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 21 13:47:10 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 21 13:47:10 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 21 13:47:10 compute-0 ceph-mon[75031]: pgmap v131: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 161 B/s, 2 keys/s, 1 objects/s recovering
Jan 21 13:47:10 compute-0 ceph-mon[75031]: 10.1f scrub starts
Jan 21 13:47:10 compute-0 ceph-mon[75031]: 10.1f scrub ok
Jan 21 13:47:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 13:47:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 21 13:47:10 compute-0 ceph-mon[75031]: osdmap e66: 3 total, 3 up, 3 in
Jan 21 13:47:10 compute-0 ceph-mon[75031]: 4.c scrub starts
Jan 21 13:47:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:47:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:47:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:47:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:47:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:47:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:47:11 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 66 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=9.361306190s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=38'39 active pruub 108.017868042s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:11 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 66 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:11 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 66 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=9.361262321s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=38'39 unknown NOTIFY pruub 108.017868042s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:11 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 66 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=9.360560417s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=38'39 active pruub 108.018013000s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:11 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 66 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=56/57 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=9.360506058s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=38'39 unknown NOTIFY pruub 108.018013000s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:11 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 66 pg[6.5( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v133: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 488 B/s, 2 keys/s, 2 objects/s recovering
Jan 21 13:47:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 21 13:47:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 21 13:47:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 21 13:47:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 21 13:47:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 21 13:47:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 13:47:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 13:47:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 21 13:47:13 compute-0 ceph-mon[75031]: 5.10 scrub starts
Jan 21 13:47:13 compute-0 ceph-mon[75031]: 5.10 scrub ok
Jan 21 13:47:13 compute-0 ceph-mon[75031]: 4.c scrub ok
Jan 21 13:47:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 21 13:47:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 21 13:47:13 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.078385353s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 active pruub 110.709602356s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:13 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.078169823s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 110.709602356s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:13 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077580452s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 active pruub 110.709732056s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:13 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077262878s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 110.709732056s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:13 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077105522s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 active pruub 110.709732056s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:13 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077071190s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 110.709732056s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:13 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077226639s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 active pruub 110.710494995s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:13 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 67 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.077081680s) [2] r=-1 lpr=67 pi=[52,67)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 110.710494995s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:13 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 21 13:47:13 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [2] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:13 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [2] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:13 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [2] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:13 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67) [2] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:13 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 67 pg[6.5( v 38'39 lc 36'11 (0'0,38'39] local-lis/les=66/67 n=2 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:13 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 67 pg[6.d( v 38'39 lc 36'13 (0'0,38'39] local-lis/les=66/67 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 402 B/s, 1 keys/s, 2 objects/s recovering
Jan 21 13:47:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 21 13:47:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 21 13:47:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 21 13:47:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 21 13:47:13 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 21 13:47:13 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 21 13:47:13 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 21 13:47:13 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 21 13:47:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 21 13:47:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 13:47:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 13:47:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 21 13:47:14 compute-0 ceph-mon[75031]: pgmap v133: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 488 B/s, 2 keys/s, 2 objects/s recovering
Jan 21 13:47:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 13:47:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 21 13:47:14 compute-0 ceph-mon[75031]: osdmap e67: 3 total, 3 up, 3 in
Jan 21 13:47:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 21 13:47:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 21 13:47:14 compute-0 ceph-mon[75031]: 10.1d scrub starts
Jan 21 13:47:14 compute-0 ceph-mon[75031]: 10.1d scrub ok
Jan 21 13:47:14 compute-0 ceph-mon[75031]: 4.0 scrub starts
Jan 21 13:47:14 compute-0 ceph-mon[75031]: 4.0 scrub ok
Jan 21 13:47:14 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:14 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:14 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:14 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 68 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:14 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[52,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:14 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.484229088s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 active pruub 117.064270020s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=68 pruub=9.587678909s) [2] r=-1 lpr=68 pi=[59,68)/1 crt=42'483 active pruub 116.167907715s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=68 pruub=9.587616920s) [2] r=-1 lpr=68 pi=[59,68)/1 crt=42'483 unknown NOTIFY pruub 116.167907715s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=68) [2] r=0 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:14 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.483852386s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 unknown NOTIFY pruub 117.064270020s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:14 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.482997894s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 active pruub 117.064353943s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.482966423s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 unknown NOTIFY pruub 117.064353943s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:14 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.479993820s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 active pruub 117.060600281s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:14 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 68 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68 pruub=10.479021072s) [2] r=-1 lpr=68 pi=[60,68)/1 crt=42'483 unknown NOTIFY pruub 117.060600281s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68) [2] r=0 lpr=68 pi=[60,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68) [2] r=0 lpr=68 pi=[60,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=68) [2] r=0 lpr=68 pi=[60,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:14 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 21 13:47:14 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 21 13:47:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 21 13:47:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 21 13:47:15 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 21 13:47:15 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:15 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:15 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:15 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:15 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:15 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:15 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:15 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 69 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:15 compute-0 ceph-mon[75031]: pgmap v135: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 402 B/s, 1 keys/s, 2 objects/s recovering
Jan 21 13:47:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 13:47:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 21 13:47:15 compute-0 ceph-mon[75031]: osdmap e68: 3 total, 3 up, 3 in
Jan 21 13:47:15 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:15 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:15 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:15 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:15 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:15 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[60,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:15 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:15 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[59,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:15 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 69 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=68/69 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:15 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 69 pg[9.e( v 42'483 (0'0,42'483] local-lis/les=68/69 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:15 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 69 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=68/69 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:15 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 69 pg[9.1e( v 42'483 (0'0,42'483] local-lis/les=68/69 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[52,68)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:15 compute-0 sudo[97989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bydzrwzmnzpuodwdbqolnnwgfykcrfhj ; /usr/bin/python3'
Jan 21 13:47:15 compute-0 sudo[97989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:47:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 0 objects/s recovering
Jan 21 13:47:15 compute-0 python3[97991]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:47:15 compute-0 podman[97992]: 2026-01-21 13:47:15.744408661 +0000 UTC m=+0.049783056 container create 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:47:15 compute-0 systemd[1]: Started libpod-conmon-4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b.scope.
Jan 21 13:47:15 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3878e462b35c765b9bed9e20a447c90005f06317252411a06e625a2c0127f696/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3878e462b35c765b9bed9e20a447c90005f06317252411a06e625a2c0127f696/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:15 compute-0 podman[97992]: 2026-01-21 13:47:15.72116859 +0000 UTC m=+0.026543085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:47:15 compute-0 podman[97992]: 2026-01-21 13:47:15.825403616 +0000 UTC m=+0.130778041 container init 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:47:15 compute-0 podman[97992]: 2026-01-21 13:47:15.836905324 +0000 UTC m=+0.142279729 container start 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:47:15 compute-0 podman[97992]: 2026-01-21 13:47:15.840928034 +0000 UTC m=+0.146302499 container attach 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 21 13:47:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 21 13:47:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 21 13:47:16 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 21 13:47:16 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.001122475s) [2] async=[2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 0'0 active pruub 118.653938293s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.001036644s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 118.653938293s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:16 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=68/69 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.007452011s) [2] async=[2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 0'0 active pruub 118.661338806s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=68/69 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.007366180s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 118.661338806s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:16 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.e( v 69'485 (0'0,69'485] local-lis/les=68/69 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.007286072s) [2] async=[2] r=-1 lpr=70 pi=[52,70)/1 crt=69'484 lcod 69'484 active pruub 118.661315918s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.e( v 69'485 (0'0,69'485] local-lis/les=68/69 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.007211685s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=69'484 lcod 69'484 unknown NOTIFY pruub 118.661315918s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:16 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.1e( v 69'484 (0'0,69'484] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.006855011s) [2] async=[2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 42'483 active pruub 118.661369324s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 70 pg[9.1e( v 69'484 (0'0,69'484] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.006767273s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=42'483 lcod 42'483 unknown NOTIFY pruub 118.661369324s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.e( v 69'485 (0'0,69'485] local-lis/les=0/0 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 pct=0'0 crt=69'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.e( v 69'485 (0'0,69'485] local-lis/les=0/0 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=69'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.1e( v 69'484 (0'0,69'484] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 70 pg[9.1e( v 69'484 (0'0,69'484] local-lis/les=0/0 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 70 pg[9.7( v 42'483 (0'0,42'483] local-lis/les=69/70 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:16 compute-0 ceph-mon[75031]: 4.3 scrub starts
Jan 21 13:47:16 compute-0 ceph-mon[75031]: 4.3 scrub ok
Jan 21 13:47:16 compute-0 ceph-mon[75031]: osdmap e69: 3 total, 3 up, 3 in
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 70 pg[9.f( v 42'483 (0'0,42'483] local-lis/les=69/70 n=7 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 70 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[60,69)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 70 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[59,69)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 21 13:47:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 21 13:47:16 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.f( v 70'484 (0'0,70'484] local-lis/les=0/0 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.f( v 70'484 (0'0,70'484] local-lis/les=0/0 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.7( v 70'484 (0'0,70'484] local-lis/les=0/0 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.7( v 70'484 (0'0,70'484] local-lis/les=0/0 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=15.640702248s) [2] async=[2] r=-1 lpr=71 pi=[59,71)/1 crt=42'483 active pruub 124.440063477s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=15.640651703s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=42'483 unknown NOTIFY pruub 124.440063477s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.7( v 70'484 (0'0,70'484] local-lis/les=69/70 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.634465218s) [2] async=[2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 lcod 42'483 active pruub 124.433929443s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.7( v 70'484 (0'0,70'484] local-lis/les=69/70 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.634424210s) [2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 lcod 42'483 unknown NOTIFY pruub 124.433929443s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.f( v 70'484 (0'0,70'484] local-lis/les=69/70 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.640273094s) [2] async=[2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 lcod 42'483 active pruub 124.439979553s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.f( v 70'484 (0'0,70'484] local-lis/les=69/70 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.640201569s) [2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 lcod 42'483 unknown NOTIFY pruub 124.439979553s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.640139580s) [2] async=[2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 active pruub 124.440063477s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:16 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 71 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=69/70 n=6 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71 pruub=15.640014648s) [2] r=-1 lpr=71 pi=[60,71)/1 crt=42'483 unknown NOTIFY pruub 124.440063477s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.e( v 69'485 (0'0,69'485] local-lis/les=70/71 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=69'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=70/71 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.6( v 42'483 (0'0,42'483] local-lis/les=70/71 n=7 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:16 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 71 pg[9.1e( v 69'484 (0'0,69'484] local-lis/les=70/71 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70) [2] r=0 lpr=70 pi=[52,70)/1 crt=69'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:16 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 21 13:47:16 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 21 13:47:17 compute-0 ceph-mon[75031]: pgmap v138: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 0 objects/s recovering
Jan 21 13:47:17 compute-0 ceph-mon[75031]: osdmap e70: 3 total, 3 up, 3 in
Jan 21 13:47:17 compute-0 ceph-mon[75031]: osdmap e71: 3 total, 3 up, 3 in
Jan 21 13:47:17 compute-0 ceph-mon[75031]: 10.1c scrub starts
Jan 21 13:47:17 compute-0 ceph-mon[75031]: 10.1c scrub ok
Jan 21 13:47:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 objects/s recovering
Jan 21 13:47:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 21 13:47:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 21 13:47:17 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 21 13:47:17 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 72 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:17 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 72 pg[9.7( v 70'484 (0'0,70'484] local-lis/les=71/72 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=70'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:17 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 72 pg[9.17( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=69/59 les/c/f=70/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:17 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 72 pg[9.f( v 70'484 (0'0,70'484] local-lis/les=71/72 n=7 ec=52/36 lis/c=69/60 les/c/f=70/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=70'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:17 compute-0 dazzling_goldwasser[98007]: could not fetch user info: no user info saved
Jan 21 13:47:17 compute-0 systemd[1]: libpod-4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b.scope: Deactivated successfully.
Jan 21 13:47:17 compute-0 podman[97992]: 2026-01-21 13:47:17.661493997 +0000 UTC m=+1.966868422 container died 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 13:47:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3878e462b35c765b9bed9e20a447c90005f06317252411a06e625a2c0127f696-merged.mount: Deactivated successfully.
Jan 21 13:47:17 compute-0 podman[97992]: 2026-01-21 13:47:17.704179873 +0000 UTC m=+2.009554298 container remove 4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b (image=quay.io/ceph/ceph:v20, name=dazzling_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:47:17 compute-0 systemd[1]: libpod-conmon-4a90e01d9ea245e49c22b98e00ca0490889f68a8bebe92d8b8e95469bf06235b.scope: Deactivated successfully.
Jan 21 13:47:17 compute-0 sudo[97989]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:17 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 21 13:47:17 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 21 13:47:17 compute-0 sudo[98128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtwseagflqnnyzbevoifeqbmslkwjydz ; /usr/bin/python3'
Jan 21 13:47:17 compute-0 sudo[98128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:47:18 compute-0 python3[98130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:47:18 compute-0 podman[98131]: 2026-01-21 13:47:18.11763845 +0000 UTC m=+0.050669947 container create d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 13:47:18 compute-0 systemd[1]: Started libpod-conmon-d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3.scope.
Jan 21 13:47:18 compute-0 podman[98131]: 2026-01-21 13:47:18.091327802 +0000 UTC m=+0.024359379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 21 13:47:18 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a2b0effbf3107bf23fe92e5ba99312efa93251fcbd051b4686ae470fb91666/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a2b0effbf3107bf23fe92e5ba99312efa93251fcbd051b4686ae470fb91666/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:18 compute-0 podman[98131]: 2026-01-21 13:47:18.213931558 +0000 UTC m=+0.146963105 container init d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 13:47:18 compute-0 podman[98131]: 2026-01-21 13:47:18.22402539 +0000 UTC m=+0.157056897 container start d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:47:18 compute-0 podman[98131]: 2026-01-21 13:47:18.228407059 +0000 UTC m=+0.161438656 container attach d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]: {
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "user_id": "openstack",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "display_name": "openstack",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "email": "",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "suspended": 0,
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "max_buckets": 1000,
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "subusers": [],
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "keys": [
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         {
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:             "user": "openstack",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:             "access_key": "FZ28JX2UU0J6W10EUYA1",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:             "secret_key": "srDBpy6DXA8pK55MZ5QaJF72pcBfM6bJTQjo7e80",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:             "active": true,
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:             "create_date": "2026-01-21T13:47:18.478009Z"
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         }
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     ],
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "swift_keys": [],
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "caps": [],
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "op_mask": "read, write, delete",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "default_placement": "",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "default_storage_class": "",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "placement_tags": [],
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "bucket_quota": {
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         "enabled": false,
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         "check_on_raw": false,
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         "max_size": -1,
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         "max_size_kb": 0,
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         "max_objects": -1
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     },
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "user_quota": {
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         "enabled": false,
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         "check_on_raw": false,
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         "max_size": -1,
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         "max_size_kb": 0,
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:         "max_objects": -1
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     },
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "temp_url_keys": [],
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "type": "rgw",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "mfa_ids": [],
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "account_id": "",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "path": "/",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "create_date": "2026-01-21T13:47:18.477459Z",
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "tags": [],
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]:     "group_ids": []
Jan 21 13:47:18 compute-0 compassionate_mcclintock[98146]: }
Jan 21 13:47:18 compute-0 systemd[1]: libpod-d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3.scope: Deactivated successfully.
Jan 21 13:47:18 compute-0 podman[98131]: 2026-01-21 13:47:18.509133457 +0000 UTC m=+0.442164964 container died d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 13:47:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6a2b0effbf3107bf23fe92e5ba99312efa93251fcbd051b4686ae470fb91666-merged.mount: Deactivated successfully.
Jan 21 13:47:18 compute-0 podman[98131]: 2026-01-21 13:47:18.556291416 +0000 UTC m=+0.489322953 container remove d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3 (image=quay.io/ceph/ceph:v20, name=compassionate_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:47:18 compute-0 ceph-mon[75031]: pgmap v141: 305 pgs: 4 unknown, 301 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 objects/s recovering
Jan 21 13:47:18 compute-0 ceph-mon[75031]: osdmap e72: 3 total, 3 up, 3 in
Jan 21 13:47:18 compute-0 ceph-mon[75031]: 4.19 scrub starts
Jan 21 13:47:18 compute-0 ceph-mon[75031]: 4.19 scrub ok
Jan 21 13:47:18 compute-0 sudo[98128]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:18 compute-0 systemd[1]: libpod-conmon-d99b4b658e7e9e643fcad454d91ea570882bf95bc7aacd18ff558338f9e39cb3.scope: Deactivated successfully.
Jan 21 13:47:18 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 21 13:47:18 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 21 13:47:18 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 21 13:47:18 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 21 13:47:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 74 op/s; 457 B/s, 11 objects/s recovering
Jan 21 13:47:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 21 13:47:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 21 13:47:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 21 13:47:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 21 13:47:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 21 13:47:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 13:47:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 13:47:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 21 13:47:19 compute-0 ceph-mon[75031]: 2.14 scrub starts
Jan 21 13:47:19 compute-0 ceph-mon[75031]: 2.14 scrub ok
Jan 21 13:47:19 compute-0 ceph-mon[75031]: 4.6 scrub starts
Jan 21 13:47:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 21 13:47:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 21 13:47:19 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 21 13:47:19 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 21 13:47:19 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 21 13:47:19 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 73 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=73 pruub=8.519322395s) [2] r=-1 lpr=73 pi=[48,73)/1 crt=38'39 lcod 0'0 active pruub 120.453178406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:19 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 73 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=48/50 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=73 pruub=8.519268990s) [2] r=-1 lpr=73 pi=[48,73)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 120.453178406s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:19 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 73 pg[6.8( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=73) [2] r=0 lpr=73 pi=[48,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:20 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 73 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=11.078841209s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=42'483 lcod 0'0 active pruub 118.709938049s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:20 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 73 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=11.078646660s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 118.709938049s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:20 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 73 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=11.078823090s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=72'486 lcod 72'486 active pruub 118.710662842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:20 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 73 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=11.078744888s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=72'486 lcod 72'486 unknown NOTIFY pruub 118.710662842s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:20 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73) [2] r=0 lpr=73 pi=[52,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:20 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73) [2] r=0 lpr=73 pi=[52,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 21 13:47:20 compute-0 ceph-mon[75031]: 4.6 scrub ok
Jan 21 13:47:20 compute-0 ceph-mon[75031]: pgmap v143: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 74 op/s; 457 B/s, 11 objects/s recovering
Jan 21 13:47:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 13:47:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 21 13:47:20 compute-0 ceph-mon[75031]: osdmap e73: 3 total, 3 up, 3 in
Jan 21 13:47:20 compute-0 ceph-mon[75031]: 3.1c scrub starts
Jan 21 13:47:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 21 13:47:20 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 21 13:47:20 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[52,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:20 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[52,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:20 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[52,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:20 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[52,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:20 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 74 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=73/74 n=1 ec=48/25 lis/c=48/48 les/c/f=50/50/0 sis=73) [2] r=0 lpr=73 pi=[48,73)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:20 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 74 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=0 lpr=74 pi=[52,74)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:20 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 74 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=0 lpr=74 pi=[52,74)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:20 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 74 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=0 lpr=74 pi=[52,74)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:20 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 74 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] r=0 lpr=74 pi=[52,74)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:20 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 21 13:47:20 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 21 13:47:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.0 KiB/s wr, 92 op/s; 400 B/s, 9 objects/s recovering
Jan 21 13:47:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 21 13:47:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 21 13:47:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 21 13:47:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 21 13:47:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 21 13:47:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 13:47:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 13:47:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 21 13:47:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 21 13:47:21 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 75 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=75 pruub=14.949493408s) [0] r=-1 lpr=75 pi=[56,75)/1 crt=38'39 lcod 0'0 active pruub 124.018341064s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:21 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 75 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=75 pruub=14.949440956s) [0] r=-1 lpr=75 pi=[56,75)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 124.018341064s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:21 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 21 13:47:21 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 75 pg[6.9( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=75) [0] r=0 lpr=75 pi=[56,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:21 compute-0 ceph-mon[75031]: 3.1c scrub ok
Jan 21 13:47:21 compute-0 ceph-mon[75031]: osdmap e74: 3 total, 3 up, 3 in
Jan 21 13:47:21 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 21 13:47:21 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 21 13:47:21 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 21 13:47:21 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 75 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=74/75 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[52,74)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:21 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 75 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=74/75 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[52,74)/1 crt=72'487 lcod 72'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 21 13:47:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 21 13:47:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 21 13:47:22 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 76 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=74/75 n=7 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76 pruub=14.991138458s) [2] async=[2] r=-1 lpr=76 pi=[52,76)/1 crt=42'483 lcod 0'0 active pruub 125.083801270s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:22 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 76 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=74/75 n=7 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76 pruub=14.991033554s) [2] r=-1 lpr=76 pi=[52,76)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 125.083801270s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:22 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 76 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=74/75 n=6 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76 pruub=14.992052078s) [2] async=[2] r=-1 lpr=76 pi=[52,76)/1 crt=72'487 lcod 72'486 active pruub 125.085105896s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:22 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 76 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=74/75 n=6 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76 pruub=14.992010117s) [2] r=-1 lpr=76 pi=[52,76)/1 crt=72'487 lcod 72'486 unknown NOTIFY pruub 125.085105896s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:22 compute-0 ceph-mon[75031]: 8.17 scrub starts
Jan 21 13:47:22 compute-0 ceph-mon[75031]: 8.17 scrub ok
Jan 21 13:47:22 compute-0 ceph-mon[75031]: pgmap v146: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.0 KiB/s wr, 92 op/s; 400 B/s, 9 objects/s recovering
Jan 21 13:47:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 13:47:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 21 13:47:22 compute-0 ceph-mon[75031]: osdmap e75: 3 total, 3 up, 3 in
Jan 21 13:47:22 compute-0 ceph-mon[75031]: 10.1b scrub starts
Jan 21 13:47:22 compute-0 ceph-mon[75031]: 10.1b scrub ok
Jan 21 13:47:22 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 76 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=75/76 n=1 ec=48/25 lis/c=56/56 les/c/f=57/57/0 sis=75) [0] r=0 lpr=75 pi=[56,75)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:22 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 76 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:22 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 76 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:22 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 76 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 pct=0'0 crt=72'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:22 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 76 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 crt=72'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 32 op/s
Jan 21 13:47:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 21 13:47:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 21 13:47:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 21 13:47:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 21 13:47:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 21 13:47:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 13:47:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 13:47:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 21 13:47:23 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 21 13:47:23 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 77 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=48/25 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=8.283657074s) [0] r=-1 lpr=77 pi=[59,77)/1 crt=38'39 lcod 0'0 active pruub 119.388328552s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:23 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 77 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=48/25 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=8.283535957s) [0] r=-1 lpr=77 pi=[59,77)/1 crt=38'39 lcod 0'0 unknown NOTIFY pruub 119.388328552s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:23 compute-0 ceph-mon[75031]: osdmap e76: 3 total, 3 up, 3 in
Jan 21 13:47:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 21 13:47:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 21 13:47:23 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 77 pg[6.a( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=59/59 les/c/f=60/60/0 sis=77) [0] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:23 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 77 pg[9.18( v 72'487 (0'0,72'487] local-lis/les=76/77 n=6 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 crt=72'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:23 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 77 pg[9.8( v 42'483 (0'0,42'483] local-lis/les=76/77 n=7 ec=52/36 lis/c=74/52 les/c/f=75/53/0 sis=76) [2] r=0 lpr=76 pi=[52,76)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 21 13:47:24 compute-0 ceph-mon[75031]: pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 32 op/s
Jan 21 13:47:24 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 13:47:24 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 21 13:47:24 compute-0 ceph-mon[75031]: osdmap e77: 3 total, 3 up, 3 in
Jan 21 13:47:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 21 13:47:24 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 21 13:47:24 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 78 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=77/78 n=1 ec=48/25 lis/c=59/59 les/c/f=60/60/0 sis=77) [0] r=0 lpr=77 pi=[59,77)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 3 objects/s recovering
Jan 21 13:47:25 compute-0 ceph-mon[75031]: osdmap e78: 3 total, 3 up, 3 in
Jan 21 13:47:25 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 21 13:47:25 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 21 13:47:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:26 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 21 13:47:26 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 21 13:47:26 compute-0 ceph-mon[75031]: pgmap v152: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 3 objects/s recovering
Jan 21 13:47:26 compute-0 ceph-mon[75031]: 4.b scrub starts
Jan 21 13:47:26 compute-0 ceph-mon[75031]: 4.b scrub ok
Jan 21 13:47:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 88 B/s, 2 objects/s recovering
Jan 21 13:47:27 compute-0 ceph-mon[75031]: 10.16 scrub starts
Jan 21 13:47:27 compute-0 ceph-mon[75031]: 10.16 scrub ok
Jan 21 13:47:28 compute-0 ceph-mon[75031]: pgmap v153: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 88 B/s, 2 objects/s recovering
Jan 21 13:47:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 1 objects/s recovering
Jan 21 13:47:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 21 13:47:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 21 13:47:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 21 13:47:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 21 13:47:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 21 13:47:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 21 13:47:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 21 13:47:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 13:47:29 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 13:47:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 21 13:47:29 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 21 13:47:29 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 79 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.063592911s) [1] r=-1 lpr=79 pi=[62,79)/1 crt=38'39 active pruub 131.136032104s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:29 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 79 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.063537598s) [1] r=-1 lpr=79 pi=[62,79)/1 crt=38'39 unknown NOTIFY pruub 131.136032104s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:29 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 79 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=79) [1] r=0 lpr=79 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:30 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 21 13:47:30 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 21 13:47:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 21 13:47:30 compute-0 ceph-mon[75031]: pgmap v154: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 1 objects/s recovering
Jan 21 13:47:30 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 13:47:30 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 21 13:47:30 compute-0 ceph-mon[75031]: osdmap e79: 3 total, 3 up, 3 in
Jan 21 13:47:30 compute-0 ceph-mon[75031]: 2.11 scrub starts
Jan 21 13:47:30 compute-0 ceph-mon[75031]: 2.11 scrub ok
Jan 21 13:47:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 21 13:47:30 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 21 13:47:30 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 80 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=79/80 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=79) [1] r=0 lpr=79 pi=[62,79)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:47:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 21 13:47:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 21 13:47:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 21 13:47:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 21 13:47:31 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 21 13:47:31 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 21 13:47:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 21 13:47:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 13:47:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 13:47:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 21 13:47:31 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 21 13:47:31 compute-0 ceph-mon[75031]: osdmap e80: 3 total, 3 up, 3 in
Jan 21 13:47:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 21 13:47:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 21 13:47:32 compute-0 sshd-session[98243]: Accepted publickey for zuul from 192.168.122.30 port 42800 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:47:32 compute-0 systemd-logind[780]: New session 34 of user zuul.
Jan 21 13:47:32 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 21 13:47:32 compute-0 sshd-session[98243]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:47:32 compute-0 ceph-mon[75031]: pgmap v157: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:47:32 compute-0 ceph-mon[75031]: 11.13 scrub starts
Jan 21 13:47:32 compute-0 ceph-mon[75031]: 11.13 scrub ok
Jan 21 13:47:32 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 13:47:32 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 21 13:47:32 compute-0 ceph-mon[75031]: osdmap e81: 3 total, 3 up, 3 in
Jan 21 13:47:33 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 81 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81 pruub=14.136120796s) [2] r=-1 lpr=81 pi=[52,81)/1 crt=42'483 lcod 0'0 active pruub 134.710083008s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:33 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 81 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81 pruub=14.136970520s) [2] r=-1 lpr=81 pi=[52,81)/1 crt=72'486 lcod 72'486 active pruub 134.710968018s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:33 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 81 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81 pruub=14.136906624s) [2] r=-1 lpr=81 pi=[52,81)/1 crt=72'486 lcod 72'486 unknown NOTIFY pruub 134.710968018s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:33 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 81 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81 pruub=14.136027336s) [2] r=-1 lpr=81 pi=[52,81)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 134.710083008s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:33 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81) [2] r=0 lpr=81 pi=[52,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:33 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=81) [2] r=0 lpr=81 pi=[52,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:33 compute-0 python3.9[98396]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:47:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:47:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 21 13:47:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 21 13:47:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 21 13:47:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 21 13:47:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 21 13:47:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 21 13:47:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 21 13:47:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 13:47:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 13:47:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 21 13:47:33 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 21 13:47:33 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 82 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=66/67 n=1 ec=48/25 lis/c=66/66 les/c/f=67/67/0 sis=82 pruub=11.302111626s) [1] r=-1 lpr=82 pi=[66,82)/1 crt=38'39 active pruub 137.414535522s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:33 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 82 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=66/67 n=1 ec=48/25 lis/c=66/66 les/c/f=67/67/0 sis=82 pruub=11.302055359s) [1] r=-1 lpr=82 pi=[66,82)/1 crt=38'39 unknown NOTIFY pruub 137.414535522s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:33 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 82 pg[9.c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[52,82)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:33 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 82 pg[9.c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[52,82)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:33 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 82 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[52,82)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:33 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 82 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[52,82)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:33 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 82 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=66/66 les/c/f=67/67/0 sis=82) [1] r=0 lpr=82 pi=[66,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:33 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 82 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=0 lpr=82 pi=[52,82)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:33 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 82 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=52/53 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=0 lpr=82 pi=[52,82)/1 crt=42'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:33 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 82 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=0 lpr=82 pi=[52,82)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:33 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 82 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] r=0 lpr=82 pi=[52,82)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:34 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 21 13:47:34 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 21 13:47:34 compute-0 sudo[98539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:47:34 compute-0 sudo[98539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:47:34 compute-0 sudo[98539]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:34 compute-0 sudo[98565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:47:34 compute-0 sudo[98565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:47:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 21 13:47:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 21 13:47:34 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 21 13:47:34 compute-0 ceph-mon[75031]: pgmap v159: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:47:34 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 13:47:34 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 21 13:47:34 compute-0 ceph-mon[75031]: osdmap e82: 3 total, 3 up, 3 in
Jan 21 13:47:34 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 83 pg[6.d( v 38'39 lc 36'13 (0'0,38'39] local-lis/les=82/83 n=1 ec=48/25 lis/c=66/66 les/c/f=67/67/0 sis=82) [1] r=0 lpr=82 pi=[66,82)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:34 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 83 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=82/83 n=7 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[52,82)/1 crt=42'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:34 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 83 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=82/83 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[52,82)/1 crt=72'487 lcod 72'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:34 compute-0 sudo[98662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrazhndvlqgwinrtngztvtirpjkozenm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003254.4989815-27-219057512391881/AnsiballZ_command.py'
Jan 21 13:47:34 compute-0 sudo[98662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:47:35 compute-0 python3.9[98664]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:47:35 compute-0 sudo[98565]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:47:35 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:47:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:47:35 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:47:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:47:35 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:47:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:47:35 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:47:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:47:35 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:47:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:47:35 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
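[editor's note] The burst of mon_command calls above is the cephadm mgr module gathering what it needs before deploying OSDs: a minimal ceph.conf, the client.admin and client.bootstrap-osd keyrings, and a check for destroyed entries in the OSD tree. A sketch of the same queries run by hand, assuming admin credentials; every command prefix appears verbatim in the audit entries:

    # reproduce the mgr's pre-deployment queries manually
    ceph config generate-minimal-conf
    ceph auth get client.admin
    ceph auth get client.bootstrap-osd
    ceph osd tree destroyed --format json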
Jan 21 13:47:35 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 21 13:47:35 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 21 13:47:35 compute-0 sudo[98704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:47:35 compute-0 sudo[98704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:47:35 compute-0 sudo[98704]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 1 peering, 2 unknown, 302 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 13:47:35 compute-0 sudo[98729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:47:35 compute-0 sudo[98729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
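[editor's note] The sudo COMMAND above shows cephadm invoking ceph-volume through its wrapper to prepare three pre-created logical volumes as bluestore OSDs. Stripped of the cephadm/podman wrapping, the inner call reduces to roughly the following; all device paths and flags are copied from the logged command line, and it would normally run inside the ceph container rather than on the host:

    # inner ceph-volume call as logged (run by cephadm inside the ceph container)
    ceph-volume lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --objectstore bluestore --yes --no-systemd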
Jan 21 13:47:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 21 13:47:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 21 13:47:35 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 21 13:47:35 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 84 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=82/83 n=7 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84 pruub=14.987791061s) [2] async=[2] r=-1 lpr=84 pi=[52,84)/1 crt=42'483 lcod 0'0 active pruub 138.355255127s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:35 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 84 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=82/83 n=7 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84 pruub=14.987714767s) [2] r=-1 lpr=84 pi=[52,84)/1 crt=42'483 lcod 0'0 unknown NOTIFY pruub 138.355255127s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:35 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 84 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=82/83 n=6 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84 pruub=14.985897064s) [2] async=[2] r=-1 lpr=84 pi=[52,84)/1 crt=72'487 lcod 72'486 active pruub 138.355270386s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:35 compute-0 ceph-mon[75031]: 11.16 scrub starts
Jan 21 13:47:35 compute-0 ceph-mon[75031]: 11.16 scrub ok
Jan 21 13:47:35 compute-0 ceph-mon[75031]: osdmap e83: 3 total, 3 up, 3 in
Jan 21 13:47:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:47:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:47:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:47:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:47:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:47:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:47:35 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 84 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=82/83 n=6 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84 pruub=14.985070229s) [2] r=-1 lpr=84 pi=[52,84)/1 crt=72'487 lcod 72'486 unknown NOTIFY pruub 138.355270386s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:35 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 84 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:35 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 84 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 pct=0'0 crt=72'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:35 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 84 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=0/0 n=7 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:35 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 84 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 crt=72'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:35 compute-0 podman[98767]: 2026-01-21 13:47:35.952541966 +0000 UTC m=+0.071523949 container create 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:47:35 compute-0 systemd[1]: Started libpod-conmon-59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295.scope.
Jan 21 13:47:36 compute-0 podman[98767]: 2026-01-21 13:47:35.921783687 +0000 UTC m=+0.040765720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:47:36 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:47:36 compute-0 podman[98767]: 2026-01-21 13:47:36.052152596 +0000 UTC m=+0.171134629 container init 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 21 13:47:36 compute-0 podman[98767]: 2026-01-21 13:47:36.063316435 +0000 UTC m=+0.182298428 container start 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Jan 21 13:47:36 compute-0 podman[98767]: 2026-01-21 13:47:36.067813417 +0000 UTC m=+0.186795460 container attach 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:47:36 compute-0 cranky_proskuriakova[98783]: 167 167
Jan 21 13:47:36 compute-0 systemd[1]: libpod-59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295.scope: Deactivated successfully.
Jan 21 13:47:36 compute-0 conmon[98783]: conmon 59dd83f5b3322c59d3ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295.scope/container/memory.events
Jan 21 13:47:36 compute-0 podman[98767]: 2026-01-21 13:47:36.072541135 +0000 UTC m=+0.191523128 container died 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Jan 21 13:47:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd8a1a6118fb72bc31abeef056ec272ff6434d160161eaee5ce89212fd135eb7-merged.mount: Deactivated successfully.
Jan 21 13:47:36 compute-0 podman[98767]: 2026-01-21 13:47:36.124910285 +0000 UTC m=+0.243892278 container remove 59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_proskuriakova, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 13:47:36 compute-0 systemd[1]: libpod-conmon-59dd83f5b3322c59d3ab9c16ed2465a271dd4e9fc501012ec33b10e8eae89295.scope: Deactivated successfully.
Jan 21 13:47:36 compute-0 podman[98806]: 2026-01-21 13:47:36.336183557 +0000 UTC m=+0.052938655 container create 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:47:36 compute-0 systemd[1]: Started libpod-conmon-806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002.scope.
Jan 21 13:47:36 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:36 compute-0 podman[98806]: 2026-01-21 13:47:36.315367626 +0000 UTC m=+0.032122744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:47:36 compute-0 podman[98806]: 2026-01-21 13:47:36.417110229 +0000 UTC m=+0.133865367 container init 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 13:47:36 compute-0 podman[98806]: 2026-01-21 13:47:36.423775266 +0000 UTC m=+0.140530374 container start 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:47:36 compute-0 podman[98806]: 2026-01-21 13:47:36.426597506 +0000 UTC m=+0.143352664 container attach 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 13:47:36 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 21 13:47:36 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 21 13:47:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:36 compute-0 festive_hermann[98822]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:47:36 compute-0 festive_hermann[98822]: --> All data devices are unavailable
Jan 21 13:47:36 compute-0 systemd[1]: libpod-806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002.scope: Deactivated successfully.
Jan 21 13:47:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 21 13:47:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 21 13:47:36 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 21 13:47:36 compute-0 ceph-mon[75031]: 2.12 scrub starts
Jan 21 13:47:36 compute-0 ceph-mon[75031]: 2.12 scrub ok
Jan 21 13:47:36 compute-0 ceph-mon[75031]: pgmap v162: 305 pgs: 1 peering, 2 unknown, 302 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 13:47:36 compute-0 ceph-mon[75031]: osdmap e84: 3 total, 3 up, 3 in
Jan 21 13:47:36 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 85 pg[9.c( v 42'483 (0'0,42'483] local-lis/les=84/85 n=7 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:36 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 85 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=84/85 n=6 ec=52/36 lis/c=82/52 les/c/f=83/53/0 sis=84) [2] r=0 lpr=84 pi=[52,84)/1 crt=72'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:36 compute-0 podman[98845]: 2026-01-21 13:47:36.957094979 +0000 UTC m=+0.026057362 container died 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:47:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d81e0b83cf6675a93d214d30a84899962848cd732f182d0923597f23fbf48d2e-merged.mount: Deactivated successfully.
Jan 21 13:47:36 compute-0 podman[98845]: 2026-01-21 13:47:36.994651668 +0000 UTC m=+0.063614041 container remove 806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 13:47:37 compute-0 systemd[1]: libpod-conmon-806e2a9338108fd61401a92ef7cb3ce2ae819563f26ea82cf45c61ad39bdc002.scope: Deactivated successfully.
Jan 21 13:47:37 compute-0 sudo[98729]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:37 compute-0 sudo[98859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:47:37 compute-0 sudo[98859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:47:37 compute-0 sudo[98859]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:37 compute-0 sudo[98884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:47:37 compute-0 sudo[98884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:47:37 compute-0 podman[98921]: 2026-01-21 13:47:37.485660913 +0000 UTC m=+0.063179050 container create 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 13:47:37 compute-0 systemd[1]: Started libpod-conmon-1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b.scope.
Jan 21 13:47:37 compute-0 podman[98921]: 2026-01-21 13:47:37.464800631 +0000 UTC m=+0.042318768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:47:37 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:47:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 1 peering, 2 unknown, 302 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 13:47:37 compute-0 podman[98921]: 2026-01-21 13:47:37.582023891 +0000 UTC m=+0.159542028 container init 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:47:37 compute-0 podman[98921]: 2026-01-21 13:47:37.593403727 +0000 UTC m=+0.170921854 container start 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 13:47:37 compute-0 wonderful_mccarthy[98937]: 167 167
Jan 21 13:47:37 compute-0 podman[98921]: 2026-01-21 13:47:37.599176501 +0000 UTC m=+0.176694628 container attach 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 13:47:37 compute-0 systemd[1]: libpod-1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b.scope: Deactivated successfully.
Jan 21 13:47:37 compute-0 podman[98921]: 2026-01-21 13:47:37.600688869 +0000 UTC m=+0.178206996 container died 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:47:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-426e042bc2b257357f28aa74ea654af8d69fb37396433939175c13c1bc9c2467-merged.mount: Deactivated successfully.
Jan 21 13:47:37 compute-0 podman[98921]: 2026-01-21 13:47:37.649198141 +0000 UTC m=+0.226716338 container remove 1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 13:47:37 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 21 13:47:37 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 21 13:47:37 compute-0 systemd[1]: libpod-conmon-1c1d754dc9bd0bc569f46b85f394457fe1aa0797253a2fa225c0ee635c07e26b.scope: Deactivated successfully.
Jan 21 13:47:37 compute-0 podman[98962]: 2026-01-21 13:47:37.893805496 +0000 UTC m=+0.060818831 container create 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:47:37 compute-0 systemd[1]: Started libpod-conmon-63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64.scope.
Jan 21 13:47:37 compute-0 ceph-mon[75031]: 10.18 scrub starts
Jan 21 13:47:37 compute-0 ceph-mon[75031]: 10.18 scrub ok
Jan 21 13:47:37 compute-0 ceph-mon[75031]: osdmap e85: 3 total, 3 up, 3 in
Jan 21 13:47:37 compute-0 podman[98962]: 2026-01-21 13:47:37.867608521 +0000 UTC m=+0.034621906 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:47:37 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9588771528aa7efbff38149d99b25fa26895e1836a3627b399de8f20bd460e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9588771528aa7efbff38149d99b25fa26895e1836a3627b399de8f20bd460e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9588771528aa7efbff38149d99b25fa26895e1836a3627b399de8f20bd460e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9588771528aa7efbff38149d99b25fa26895e1836a3627b399de8f20bd460e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:38 compute-0 podman[98962]: 2026-01-21 13:47:38.014592815 +0000 UTC m=+0.181606170 container init 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:47:38 compute-0 podman[98962]: 2026-01-21 13:47:38.020613616 +0000 UTC m=+0.187626951 container start 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 13:47:38 compute-0 podman[98962]: 2026-01-21 13:47:38.024722409 +0000 UTC m=+0.191735794 container attach 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]: {
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:     "0": [
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:         {
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "devices": [
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "/dev/loop3"
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             ],
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_name": "ceph_lv0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_size": "21470642176",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "name": "ceph_lv0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "tags": {
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.cluster_name": "ceph",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.crush_device_class": "",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.encrypted": "0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.objectstore": "bluestore",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.osd_id": "0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.type": "block",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.vdo": "0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.with_tpm": "0"
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             },
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "type": "block",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "vg_name": "ceph_vg0"
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:         }
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:     ],
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:     "1": [
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:         {
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "devices": [
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "/dev/loop4"
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             ],
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_name": "ceph_lv1",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_size": "21470642176",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "name": "ceph_lv1",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "tags": {
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.cluster_name": "ceph",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.crush_device_class": "",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.encrypted": "0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.objectstore": "bluestore",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.osd_id": "1",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.type": "block",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.vdo": "0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.with_tpm": "0"
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             },
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "type": "block",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "vg_name": "ceph_vg1"
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:         }
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:     ],
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:     "2": [
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:         {
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "devices": [
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "/dev/loop5"
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             ],
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_name": "ceph_lv2",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_size": "21470642176",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "name": "ceph_lv2",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "tags": {
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.cluster_name": "ceph",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.crush_device_class": "",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.encrypted": "0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.objectstore": "bluestore",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.osd_id": "2",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.type": "block",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.vdo": "0",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:                 "ceph.with_tpm": "0"
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             },
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "type": "block",
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:             "vg_name": "ceph_vg2"
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:         }
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]:     ]
Jan 21 13:47:38 compute-0 inspiring_ritchie[98978]: }
Jan 21 13:47:38 compute-0 systemd[1]: libpod-63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64.scope: Deactivated successfully.
Jan 21 13:47:38 compute-0 podman[98962]: 2026-01-21 13:47:38.321799006 +0000 UTC m=+0.488812361 container died 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 13:47:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9588771528aa7efbff38149d99b25fa26895e1836a3627b399de8f20bd460e4-merged.mount: Deactivated successfully.
Jan 21 13:47:38 compute-0 podman[98962]: 2026-01-21 13:47:38.377440347 +0000 UTC m=+0.544453702 container remove 63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:47:38 compute-0 systemd[1]: libpod-conmon-63d60d43261e038305e04b41c8a33cd5075d9a9632d343f261f51eca025d6d64.scope: Deactivated successfully.
Jan 21 13:47:38 compute-0 sudo[98884]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:38 compute-0 sudo[98999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:47:38 compute-0 sudo[98999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:47:38 compute-0 sudo[98999]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:38 compute-0 sudo[99024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:47:38 compute-0 sudo[99024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:47:38 compute-0 podman[99061]: 2026-01-21 13:47:38.900334619 +0000 UTC m=+0.050015872 container create 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:47:38 compute-0 systemd[1]: Started libpod-conmon-5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653.scope.
Jan 21 13:47:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:47:38 compute-0 ceph-mon[75031]: pgmap v165: 305 pgs: 1 peering, 2 unknown, 302 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 21 13:47:38 compute-0 ceph-mon[75031]: 3.1a scrub starts
Jan 21 13:47:38 compute-0 ceph-mon[75031]: 3.1a scrub ok
Jan 21 13:47:38 compute-0 podman[99061]: 2026-01-21 13:47:38.966249417 +0000 UTC m=+0.115930690 container init 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 13:47:38 compute-0 podman[99061]: 2026-01-21 13:47:38.87759178 +0000 UTC m=+0.027273013 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:47:38 compute-0 podman[99061]: 2026-01-21 13:47:38.977362324 +0000 UTC m=+0.127043557 container start 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:47:38 compute-0 happy_shamir[99075]: 167 167
Jan 21 13:47:38 compute-0 podman[99061]: 2026-01-21 13:47:38.983018996 +0000 UTC m=+0.132700259 container attach 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 13:47:38 compute-0 systemd[1]: libpod-5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653.scope: Deactivated successfully.
Jan 21 13:47:38 compute-0 podman[99061]: 2026-01-21 13:47:38.986089512 +0000 UTC m=+0.135770755 container died 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:47:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-41f9cbb39e4d34f80418c1a61515d93f2af4ebe74ebfe97818e330d54485bc31-merged.mount: Deactivated successfully.
Jan 21 13:47:39 compute-0 podman[99061]: 2026-01-21 13:47:39.026602505 +0000 UTC m=+0.176283718 container remove 5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shamir, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 13:47:39 compute-0 systemd[1]: libpod-conmon-5224941f0fb95c3dce0c0e7ecae36caa6e2c271a16d5ba504682dc7cab8d0653.scope: Deactivated successfully.
Jan 21 13:47:39 compute-0 podman[99100]: 2026-01-21 13:47:39.178097342 +0000 UTC m=+0.043341704 container create e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:47:39 compute-0 systemd[1]: Started libpod-conmon-e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d.scope.
Jan 21 13:47:39 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e659eaada6580e5fa0c13f38db5465a85d4268fa97f2ea29f87425e608832d6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e659eaada6580e5fa0c13f38db5465a85d4268fa97f2ea29f87425e608832d6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e659eaada6580e5fa0c13f38db5465a85d4268fa97f2ea29f87425e608832d6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e659eaada6580e5fa0c13f38db5465a85d4268fa97f2ea29f87425e608832d6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:47:39 compute-0 podman[99100]: 2026-01-21 13:47:39.162714398 +0000 UTC m=+0.027958780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:47:39 compute-0 podman[99100]: 2026-01-21 13:47:39.268883482 +0000 UTC m=+0.134127864 container init e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 13:47:39 compute-0 podman[99100]: 2026-01-21 13:47:39.278713778 +0000 UTC m=+0.143958140 container start e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:47:39 compute-0 podman[99100]: 2026-01-21 13:47:39.290583675 +0000 UTC m=+0.155828067 container attach e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 13:47:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:47:39
Jan 21 13:47:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:47:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Some PGs (0.006557) are unknown; try again later
Jan 21 13:47:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 721 B/s wr, 18 op/s; 86 B/s, 2 objects/s recovering
Jan 21 13:47:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 21 13:47:39 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 21 13:47:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 21 13:47:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 21 13:47:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 21 13:47:39 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 21 13:47:39 compute-0 lvm[99200]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:47:39 compute-0 lvm[99200]: VG ceph_vg0 finished
Jan 21 13:47:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 21 13:47:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 13:47:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 13:47:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 21 13:47:39 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 21 13:47:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 21 13:47:39 compute-0 ceph-mon[75031]: 7.1e scrub starts
Jan 21 13:47:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 21 13:47:39 compute-0 ceph-mon[75031]: 7.1e scrub ok
Jan 21 13:47:39 compute-0 lvm[99202]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:47:39 compute-0 lvm[99202]: VG ceph_vg1 finished
Jan 21 13:47:39 compute-0 lvm[99204]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:47:39 compute-0 lvm[99204]: VG ceph_vg2 finished
Jan 21 13:47:40 compute-0 jovial_rhodes[99117]: {}
Jan 21 13:47:40 compute-0 podman[99100]: 2026-01-21 13:47:40.099642571 +0000 UTC m=+0.964886923 container died e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:47:40 compute-0 systemd[1]: libpod-e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d.scope: Deactivated successfully.
Jan 21 13:47:40 compute-0 systemd[1]: libpod-e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d.scope: Consumed 1.254s CPU time.
Jan 21 13:47:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e659eaada6580e5fa0c13f38db5465a85d4268fa97f2ea29f87425e608832d6f-merged.mount: Deactivated successfully.
Jan 21 13:47:40 compute-0 podman[99100]: 2026-01-21 13:47:40.154214586 +0000 UTC m=+1.019458948 container remove e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 13:47:40 compute-0 systemd[1]: libpod-conmon-e5697401d94390440cb564cd3e0efb5251b3119ccd1fbe86339c6287ed566b7d.scope: Deactivated successfully.
Jan 21 13:47:40 compute-0 sudo[99024]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:47:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:47:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:47:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:47:40 compute-0 sudo[99223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:47:40 compute-0 sudo[99223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:47:40 compute-0 sudo[99223]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:40 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 21 13:47:40 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:47:40 compute-0 ceph-mon[75031]: pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 721 B/s wr, 18 op/s; 86 B/s, 2 objects/s recovering
Jan 21 13:47:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 13:47:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 21 13:47:40 compute-0 ceph-mon[75031]: osdmap e86: 3 total, 3 up, 3 in
Jan 21 13:47:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:47:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:47:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:47:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 81 B/s, 2 objects/s recovering
Jan 21 13:47:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 21 13:47:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 21 13:47:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 21 13:47:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 21 13:47:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 21 13:47:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 13:47:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 13:47:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 21 13:47:41 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 21 13:47:42 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 87 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=87 pruub=12.918298721s) [2] r=-1 lpr=87 pi=[62,87)/1 crt=38'39 active pruub 147.136352539s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:42 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 87 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=62/63 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=87 pruub=12.918251991s) [2] r=-1 lpr=87 pi=[62,87)/1 crt=38'39 unknown NOTIFY pruub 147.136352539s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:42 compute-0 ceph-mon[75031]: 5.17 scrub starts
Jan 21 13:47:42 compute-0 ceph-mon[75031]: 5.17 scrub ok
Jan 21 13:47:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 21 13:47:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 21 13:47:42 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 87 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=87) [2] r=0 lpr=87 pi=[62,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:42 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 21 13:47:42 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 21 13:47:42 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 21 13:47:42 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 21 13:47:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 21 13:47:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 21 13:47:43 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 21 13:47:43 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 88 pg[6.f( v 38'39 lc 36'1 (0'0,38'39] local-lis/les=87/88 n=1 ec=48/25 lis/c=62/62 les/c/f=63/63/0 sis=87) [2] r=0 lpr=87 pi=[62,87)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:43 compute-0 ceph-mon[75031]: pgmap v168: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 81 B/s, 2 objects/s recovering
Jan 21 13:47:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 13:47:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 21 13:47:43 compute-0 ceph-mon[75031]: osdmap e87: 3 total, 3 up, 3 in
Jan 21 13:47:43 compute-0 ceph-mon[75031]: 5.14 scrub starts
Jan 21 13:47:43 compute-0 ceph-mon[75031]: 5.14 scrub ok
Jan 21 13:47:43 compute-0 sudo[98662]: pam_unix(sudo:session): session closed for user root
Jan 21 13:47:43 compute-0 sshd-session[98246]: Connection closed by 192.168.122.30 port 42800
Jan 21 13:47:43 compute-0 sshd-session[98243]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:47:43 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 21 13:47:43 compute-0 systemd[1]: session-34.scope: Consumed 8.636s CPU time.
Jan 21 13:47:43 compute-0 systemd-logind[780]: Session 34 logged out. Waiting for processes to exit.
Jan 21 13:47:43 compute-0 systemd-logind[780]: Removed session 34.
Jan 21 13:47:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 81 B/s, 2 objects/s recovering
Jan 21 13:47:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 21 13:47:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 21 13:47:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 21 13:47:44 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 21 13:47:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 21 13:47:44 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 21 13:47:44 compute-0 ceph-mon[75031]: 2.10 scrub starts
Jan 21 13:47:44 compute-0 ceph-mon[75031]: 2.10 scrub ok
Jan 21 13:47:44 compute-0 ceph-mon[75031]: osdmap e88: 3 total, 3 up, 3 in
Jan 21 13:47:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 21 13:47:45 compute-0 ceph-mon[75031]: pgmap v171: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 81 B/s, 2 objects/s recovering
Jan 21 13:47:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 21 13:47:45 compute-0 ceph-mon[75031]: osdmap e89: 3 total, 3 up, 3 in
Jan 21 13:47:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 0 objects/s recovering
Jan 21 13:47:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 21 13:47:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 21 13:47:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 21 13:47:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 21 13:47:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 21 13:47:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 21 13:47:46 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 21 13:47:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:47 compute-0 ceph-mon[75031]: pgmap v173: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 0 objects/s recovering
Jan 21 13:47:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 21 13:47:47 compute-0 ceph-mon[75031]: osdmap e90: 3 total, 3 up, 3 in
Jan 21 13:47:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 110 B/s, 0 objects/s recovering
Jan 21 13:47:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 21 13:47:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 21 13:47:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 21 13:47:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 21 13:47:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 21 13:47:48 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 21 13:47:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 21 13:47:48 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 21 13:47:48 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 21 13:47:49 compute-0 ceph-mon[75031]: pgmap v175: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 110 B/s, 0 objects/s recovering
Jan 21 13:47:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 21 13:47:49 compute-0 ceph-mon[75031]: osdmap e91: 3 total, 3 up, 3 in
Jan 21 13:47:49 compute-0 ceph-mon[75031]: 3.19 scrub starts
Jan 21 13:47:49 compute-0 ceph-mon[75031]: 3.19 scrub ok
Jan 21 13:47:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 21 13:47:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 21 13:47:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 21 13:47:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 21 13:47:50 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 21 13:47:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 21 13:47:50 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 21 13:47:50 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2408416314835177e-06 of space, bias 4.0, pg target 0.0014890099577802214 quantized to 16 (current 16)
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:47:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 13:47:50 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 92 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=92 pruub=14.150473595s) [2] r=-1 lpr=92 pi=[60,92)/1 crt=69'484 lcod 69'484 active pruub 157.065185547s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:50 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 92 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=92 pruub=14.150398254s) [2] r=-1 lpr=92 pi=[60,92)/1 crt=69'484 lcod 69'484 unknown NOTIFY pruub 157.065185547s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:50 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=92) [2] r=0 lpr=92 pi=[60,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 21 13:47:51 compute-0 ceph-mon[75031]: pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 21 13:47:51 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 21 13:47:51 compute-0 ceph-mon[75031]: osdmap e92: 3 total, 3 up, 3 in
Jan 21 13:47:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 21 13:47:51 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 21 13:47:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[60,93)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:51 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[60,93)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:51 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 93 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=93) [2]/[0] r=0 lpr=93 pi=[60,93)/1 crt=69'484 lcod 69'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:51 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 93 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=93) [2]/[0] r=0 lpr=93 pi=[60,93)/1 crt=69'484 lcod 69'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:51 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 21 13:47:51 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 21 13:47:51 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 21 13:47:51 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 21 13:47:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:47:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 21 13:47:51 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 21 13:47:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 21 13:47:52 compute-0 ceph-mon[75031]: osdmap e93: 3 total, 3 up, 3 in
Jan 21 13:47:52 compute-0 ceph-mon[75031]: 5.8 scrub starts
Jan 21 13:47:52 compute-0 ceph-mon[75031]: 5.8 scrub ok
Jan 21 13:47:52 compute-0 ceph-mon[75031]: 5.15 scrub starts
Jan 21 13:47:52 compute-0 ceph-mon[75031]: 5.15 scrub ok
Jan 21 13:47:52 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 21 13:47:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 21 13:47:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 21 13:47:52 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 21 13:47:52 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 94 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=93/94 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[60,93)/1 crt=72'485 lcod 69'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:52 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 21 13:47:52 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 21 13:47:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 21 13:47:53 compute-0 ceph-mon[75031]: pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:47:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 21 13:47:53 compute-0 ceph-mon[75031]: osdmap e94: 3 total, 3 up, 3 in
Jan 21 13:47:53 compute-0 ceph-mon[75031]: 7.1d scrub starts
Jan 21 13:47:53 compute-0 ceph-mon[75031]: 7.1d scrub ok
Jan 21 13:47:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 21 13:47:53 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 21 13:47:53 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 95 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=93/94 n=6 ec=52/36 lis/c=93/60 les/c/f=94/61/0 sis=95 pruub=15.251998901s) [2] async=[2] r=-1 lpr=95 pi=[60,95)/1 crt=72'485 lcod 69'484 active pruub 160.677017212s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:53 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 95 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=93/94 n=6 ec=52/36 lis/c=93/60 les/c/f=94/61/0 sis=95 pruub=15.251936913s) [2] r=-1 lpr=95 pi=[60,95)/1 crt=72'485 lcod 69'484 unknown NOTIFY pruub 160.677017212s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:53 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 95 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=0/0 n=6 ec=52/36 lis/c=93/60 les/c/f=94/61/0 sis=95) [2] r=0 lpr=95 pi=[60,95)/1 pct=0'0 crt=72'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:53 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 95 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=0/0 n=6 ec=52/36 lis/c=93/60 les/c/f=94/61/0 sis=95) [2] r=0 lpr=95 pi=[60,95)/1 crt=72'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:53 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 21 13:47:53 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 21 13:47:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:47:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 21 13:47:53 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 21 13:47:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 21 13:47:54 compute-0 ceph-mon[75031]: osdmap e95: 3 total, 3 up, 3 in
Jan 21 13:47:54 compute-0 ceph-mon[75031]: 2.e scrub starts
Jan 21 13:47:54 compute-0 ceph-mon[75031]: 2.e scrub ok
Jan 21 13:47:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 21 13:47:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 21 13:47:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 21 13:47:54 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 21 13:47:54 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 96 pg[9.13( v 72'485 (0'0,72'485] local-lis/les=95/96 n=6 ec=52/36 lis/c=93/60 les/c/f=94/61/0 sis=95) [2] r=0 lpr=95 pi=[60,95)/1 crt=72'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:54 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 96 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=96 pruub=9.587777138s) [1] r=-1 lpr=96 pi=[59,96)/1 crt=42'483 active pruub 156.169006348s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:54 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 96 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=96 pruub=9.587731361s) [1] r=-1 lpr=96 pi=[59,96)/1 crt=42'483 unknown NOTIFY pruub 156.169006348s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:54 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=96) [1] r=0 lpr=96 pi=[59,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:54 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 21 13:47:54 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 21 13:47:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 21 13:47:55 compute-0 ceph-mon[75031]: pgmap v183: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:47:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 21 13:47:55 compute-0 ceph-mon[75031]: osdmap e96: 3 total, 3 up, 3 in
Jan 21 13:47:55 compute-0 ceph-mon[75031]: 2.13 scrub starts
Jan 21 13:47:55 compute-0 ceph-mon[75031]: 2.13 scrub ok
Jan 21 13:47:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 21 13:47:55 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 21 13:47:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 97 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=97) [1]/[0] r=-1 lpr=97 pi=[59,97)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:55 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 97 pg[9.15( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=97) [1]/[0] r=-1 lpr=97 pi=[59,97)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 97 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=97) [1]/[0] r=0 lpr=97 pi=[59,97)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:55 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 97 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=59/60 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=97) [1]/[0] r=0 lpr=97 pi=[59,97)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 0 objects/s recovering
Jan 21 13:47:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 21 13:47:56 compute-0 ceph-mon[75031]: osdmap e97: 3 total, 3 up, 3 in
Jan 21 13:47:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 21 13:47:56 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 21 13:47:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 98 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=97/98 n=6 ec=52/36 lis/c=59/59 les/c/f=60/60/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[59,97)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:56 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 21 13:47:56 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 21 13:47:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:47:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 21 13:47:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 21 13:47:56 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 21 13:47:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 99 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=97/98 n=6 ec=52/36 lis/c=97/59 les/c/f=98/60/0 sis=99 pruub=15.705821991s) [1] async=[1] r=-1 lpr=99 pi=[59,99)/1 crt=42'483 active pruub 164.488800049s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:56 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 99 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=97/98 n=6 ec=52/36 lis/c=97/59 les/c/f=98/60/0 sis=99 pruub=15.705730438s) [1] r=-1 lpr=99 pi=[59,99)/1 crt=42'483 unknown NOTIFY pruub 164.488800049s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 99 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=97/59 les/c/f=98/60/0 sis=99) [1] r=0 lpr=99 pi=[59,99)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:56 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 99 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=97/59 les/c/f=98/60/0 sis=99) [1] r=0 lpr=99 pi=[59,99)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:57 compute-0 ceph-mon[75031]: pgmap v186: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 0 objects/s recovering
Jan 21 13:47:57 compute-0 ceph-mon[75031]: osdmap e98: 3 total, 3 up, 3 in
Jan 21 13:47:57 compute-0 ceph-mon[75031]: 8.13 scrub starts
Jan 21 13:47:57 compute-0 ceph-mon[75031]: 8.13 scrub ok
Jan 21 13:47:57 compute-0 ceph-mon[75031]: osdmap e99: 3 total, 3 up, 3 in
Jan 21 13:47:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 21 13:47:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 21 13:47:57 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 21 13:47:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:47:57 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 100 pg[9.15( v 42'483 (0'0,42'483] local-lis/les=99/100 n=6 ec=52/36 lis/c=97/59 les/c/f=98/60/0 sis=99) [1] r=0 lpr=99 pi=[59,99)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:47:58 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 21 13:47:58 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 21 13:47:58 compute-0 ceph-mon[75031]: osdmap e100: 3 total, 3 up, 3 in
Jan 21 13:47:58 compute-0 ceph-mon[75031]: pgmap v190: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:47:59 compute-0 sshd-session[99281]: Accepted publickey for zuul from 192.168.122.30 port 38186 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:47:59 compute-0 systemd-logind[780]: New session 35 of user zuul.
Jan 21 13:47:59 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 21 13:47:59 compute-0 sshd-session[99281]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:47:59 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 21 13:47:59 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 21 13:47:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 473 B/s wr, 10 op/s; 50 B/s, 1 objects/s recovering
Jan 21 13:47:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 21 13:47:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 21 13:47:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 21 13:47:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 21 13:47:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 21 13:47:59 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 101 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=101 pruub=12.996678352s) [0] r=-1 lpr=101 pi=[70,101)/1 crt=42'483 active pruub 155.522689819s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:47:59 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 101 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=101 pruub=12.996642113s) [0] r=-1 lpr=101 pi=[70,101)/1 crt=42'483 unknown NOTIFY pruub 155.522689819s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:47:59 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=101) [0] r=0 lpr=101 pi=[70,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:47:59 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 21 13:47:59 compute-0 ceph-mon[75031]: 7.7 scrub starts
Jan 21 13:47:59 compute-0 ceph-mon[75031]: 7.7 scrub ok
Jan 21 13:47:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 21 13:47:59 compute-0 python3.9[99434]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 21 13:48:00 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 21 13:48:00 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 21 13:48:00 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 21 13:48:00 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 21 13:48:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 21 13:48:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 21 13:48:00 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 21 13:48:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[70,102)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:00 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[70,102)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:00 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 102 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=0 lpr=102 pi=[70,102)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:00 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 102 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=0 lpr=102 pi=[70,102)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:00 compute-0 ceph-mon[75031]: 10.5 scrub starts
Jan 21 13:48:00 compute-0 ceph-mon[75031]: 10.5 scrub ok
Jan 21 13:48:00 compute-0 ceph-mon[75031]: pgmap v191: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 473 B/s wr, 10 op/s; 50 B/s, 1 objects/s recovering
Jan 21 13:48:00 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 21 13:48:00 compute-0 ceph-mon[75031]: osdmap e101: 3 total, 3 up, 3 in
Jan 21 13:48:00 compute-0 ceph-mon[75031]: osdmap e102: 3 total, 3 up, 3 in
Jan 21 13:48:01 compute-0 python3.9[99608]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:48:01 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 21 13:48:01 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 21 13:48:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 408 B/s wr, 8 op/s; 43 B/s, 1 objects/s recovering
Jan 21 13:48:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 21 13:48:01 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 21 13:48:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 21 13:48:01 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 21 13:48:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 21 13:48:01 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 21 13:48:01 compute-0 ceph-mon[75031]: 8.8 scrub starts
Jan 21 13:48:01 compute-0 ceph-mon[75031]: 8.8 scrub ok
Jan 21 13:48:01 compute-0 ceph-mon[75031]: 5.a scrub starts
Jan 21 13:48:01 compute-0 ceph-mon[75031]: 5.a scrub ok
Jan 21 13:48:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 21 13:48:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 21 13:48:01 compute-0 ceph-mon[75031]: osdmap e103: 3 total, 3 up, 3 in
Jan 21 13:48:01 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 103 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=102/103 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] async=[0] r=0 lpr=102 pi=[70,102)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:48:02 compute-0 sudo[99762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-visnjaxnmgzhrdxvnocyemfybknhnsri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003281.582759-40-118670619203862/AnsiballZ_command.py'
Jan 21 13:48:02 compute-0 sudo[99762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:48:02 compute-0 python3.9[99764]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:48:02 compute-0 sudo[99762]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:02 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 21 13:48:02 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 21 13:48:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 21 13:48:02 compute-0 ceph-mon[75031]: 2.c scrub starts
Jan 21 13:48:02 compute-0 ceph-mon[75031]: 2.c scrub ok
Jan 21 13:48:02 compute-0 ceph-mon[75031]: pgmap v194: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 408 B/s wr, 8 op/s; 43 B/s, 1 objects/s recovering
Jan 21 13:48:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 21 13:48:02 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 21 13:48:02 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 104 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=102/103 n=6 ec=52/36 lis/c=102/70 les/c/f=103/71/0 sis=104 pruub=14.991653442s) [0] async=[0] r=-1 lpr=104 pi=[70,104)/1 crt=42'483 active pruub 160.563919067s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:02 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 104 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=102/103 n=6 ec=52/36 lis/c=102/70 les/c/f=103/71/0 sis=104 pruub=14.991371155s) [0] r=-1 lpr=104 pi=[70,104)/1 crt=42'483 unknown NOTIFY pruub 160.563919067s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:02 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 104 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=102/70 les/c/f=103/71/0 sis=104) [0] r=0 lpr=104 pi=[70,104)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:02 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 104 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=102/70 les/c/f=103/71/0 sis=104) [0] r=0 lpr=104 pi=[70,104)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:02 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 21 13:48:02 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 21 13:48:02 compute-0 sudo[99915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nplkcyokddvaukwkcalngugsxwwmyhba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003282.542801-52-43783971902191/AnsiballZ_stat.py'
Jan 21 13:48:03 compute-0 sudo[99915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:48:03 compute-0 python3.9[99917]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:48:03 compute-0 sudo[99915]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 21 13:48:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 21 13:48:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 21 13:48:03 compute-0 ceph-mon[75031]: 8.a scrub starts
Jan 21 13:48:03 compute-0 ceph-mon[75031]: 8.a scrub ok
Jan 21 13:48:03 compute-0 ceph-mon[75031]: osdmap e104: 3 total, 3 up, 3 in
Jan 21 13:48:03 compute-0 ceph-mon[75031]: 10.1e scrub starts
Jan 21 13:48:03 compute-0 ceph-mon[75031]: 10.1e scrub ok
Jan 21 13:48:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 21 13:48:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 21 13:48:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 21 13:48:03 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 21 13:48:03 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 105 pg[9.16( v 42'483 (0'0,42'483] local-lis/les=104/105 n=6 ec=52/36 lis/c=102/70 les/c/f=103/71/0 sis=104) [0] r=0 lpr=104 pi=[70,104)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:48:04 compute-0 sudo[100069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jasmkdbwmsxguvctiuyvzgfpapbqiafh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003283.5587912-63-118307934201549/AnsiballZ_file.py'
Jan 21 13:48:04 compute-0 sudo[100069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:48:04 compute-0 python3.9[100071]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:48:04 compute-0 sudo[100069]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:04 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 21 13:48:04 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 21 13:48:04 compute-0 ceph-mon[75031]: pgmap v197: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 21 13:48:04 compute-0 ceph-mon[75031]: osdmap e105: 3 total, 3 up, 3 in
Jan 21 13:48:04 compute-0 sudo[100221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqlhikwutyenmmtuzgqtsjqlxxitwlxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003284.5206895-72-147594153616978/AnsiballZ_file.py'
Jan 21 13:48:04 compute-0 sudo[100221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:48:05 compute-0 python3.9[100223]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:48:05 compute-0 sudo[100221]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 21 13:48:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 21 13:48:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 21 13:48:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 21 13:48:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 21 13:48:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 21 13:48:05 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 21 13:48:05 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 106 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=106 pruub=15.186129570s) [2] r=-1 lpr=106 pi=[60,106)/1 crt=72'486 lcod 72'486 active pruub 173.065551758s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:05 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 106 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=106 pruub=15.186038971s) [2] r=-1 lpr=106 pi=[60,106)/1 crt=72'486 lcod 72'486 unknown NOTIFY pruub 173.065551758s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:05 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=106) [2] r=0 lpr=106 pi=[60,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:05 compute-0 ceph-mon[75031]: 5.b scrub starts
Jan 21 13:48:05 compute-0 ceph-mon[75031]: 5.b scrub ok
Jan 21 13:48:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 21 13:48:05 compute-0 python3.9[100373]: ansible-ansible.builtin.service_facts Invoked
Jan 21 13:48:05 compute-0 network[100390]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 13:48:05 compute-0 network[100391]: 'network-scripts' will be removed from distribution in near future.
Jan 21 13:48:05 compute-0 network[100392]: It is advised to switch to 'NetworkManager' instead for network management.
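The three network[...] lines above are the legacy initscript's own deprecation warning, emitted because the host still uses network-scripts. A quick way to see which of the two services is actually managing the host, as a sketch assuming a standard systemd-based EL9 layout:

    import subprocess

    for unit in ("network", "NetworkManager"):
        state = subprocess.run(
            ["systemctl", "is-active", unit],
            capture_output=True, text=True,
        ).stdout.strip()
        print(f"{unit}: {state}")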
Jan 21 13:48:06 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 21 13:48:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 21 13:48:06 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 21 13:48:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 21 13:48:06 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 21 13:48:06 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 107 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=107) [2]/[0] r=0 lpr=107 pi=[60,107)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:06 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 107 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=60/61 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=107) [2]/[0] r=0 lpr=107 pi=[60,107)/1 crt=72'486 lcod 72'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:06 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[60,107)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:06 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[60,107)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:06 compute-0 ceph-mon[75031]: pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 21 13:48:06 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 21 13:48:06 compute-0 ceph-mon[75031]: osdmap e106: 3 total, 3 up, 3 in
Jan 21 13:48:07 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 21 13:48:07 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 21 13:48:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 21 13:48:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 21 13:48:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 21 13:48:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 21 13:48:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 21 13:48:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 21 13:48:07 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 21 13:48:07 compute-0 ceph-mon[75031]: 11.0 scrub starts
Jan 21 13:48:07 compute-0 ceph-mon[75031]: 11.0 scrub ok
Jan 21 13:48:07 compute-0 ceph-mon[75031]: osdmap e107: 3 total, 3 up, 3 in
Jan 21 13:48:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 21 13:48:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 21 13:48:07 compute-0 ceph-mon[75031]: osdmap e108: 3 total, 3 up, 3 in
Jan 21 13:48:07 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 108 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=107/108 n=6 ec=52/36 lis/c=60/60 les/c/f=61/61/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[60,107)/1 crt=72'487 lcod 72'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:48:08 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 21 13:48:08 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 21 13:48:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 21 13:48:08 compute-0 ceph-mon[75031]: 10.3 scrub starts
Jan 21 13:48:08 compute-0 ceph-mon[75031]: 10.3 scrub ok
Jan 21 13:48:08 compute-0 ceph-mon[75031]: pgmap v202: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 21 13:48:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 21 13:48:08 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 21 13:48:08 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 109 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=107/108 n=6 ec=52/36 lis/c=107/60 les/c/f=108/61/0 sis=109 pruub=14.968790054s) [2] async=[2] r=-1 lpr=109 pi=[60,109)/1 crt=72'487 lcod 72'486 active pruub 175.924285889s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:08 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 109 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=107/108 n=6 ec=52/36 lis/c=107/60 les/c/f=108/61/0 sis=109 pruub=14.968612671s) [2] r=-1 lpr=109 pi=[60,109)/1 crt=72'487 lcod 72'486 unknown NOTIFY pruub 175.924285889s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:08 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 109 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=107/60 les/c/f=108/61/0 sis=109) [2] r=0 lpr=109 pi=[60,109)/1 pct=0'0 crt=72'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:08 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 109 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=107/60 les/c/f=108/61/0 sis=109) [2] r=0 lpr=109 pi=[60,109)/1 crt=72'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
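The ceph-osd lines above trace pg 9.19 through repeated peering intervals as its up/acting sets flip between osd.0 and osd.2 while pgp_num grows. When auditing a log like this, those transitions can be pulled out mechanically; a best-effort sketch keyed to the line format shown here (not a format Ceph guarantees to keep stable):

    import re

    PEERING = re.compile(
        r"osd\.(?P<osd>\d+) pg_epoch: (?P<epoch>\d+) pg\[(?P<pgid>[^(\s]+)\("
        r".*PeeringState::start_peering_interval "
        r"up \[(?P<up_old>[^\]]*)\] -> \[(?P<up_new>[^\]]*)\], "
        r"acting \[(?P<act_old>[^\]]*)\] -> \[(?P<act_new>[^\]]*)\]"
    )

    sample = ("osd.0 pg_epoch: 106 pg[9.19( v 72'487 ...] "
              "PeeringState::start_peering_interval up [0] -> [2], "
              "acting [0] -> [2], acting_primary 0 -> 2")
    match = PEERING.search(sample)
    if match:
        print(match.groupdict())  # osd, epoch, pgid, old/new up and acting sets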
Jan 21 13:48:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 21 13:48:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 21 13:48:09 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 21 13:48:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 21 13:48:09 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 21 13:48:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 21 13:48:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 21 13:48:09 compute-0 ceph-mon[75031]: 5.3 scrub starts
Jan 21 13:48:09 compute-0 ceph-mon[75031]: 5.3 scrub ok
Jan 21 13:48:09 compute-0 ceph-mon[75031]: osdmap e109: 3 total, 3 up, 3 in
Jan 21 13:48:09 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 21 13:48:09 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 21 13:48:09 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 110 pg[9.19( v 72'487 (0'0,72'487] local-lis/les=109/110 n=6 ec=52/36 lis/c=107/60 les/c/f=108/61/0 sis=109) [2] r=0 lpr=109 pi=[60,109)/1 crt=72'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:48:09 compute-0 python3.9[100652]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
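The lineinfile task above targets /proc/cmdline, which is a read-only kernel interface, so in practice it can only confirm whether cloud-init=disabled was passed at boot. The same check done directly, as a sketch:

    # /proc/cmdline is a single space-separated line of boot parameters.
    with open("/proc/cmdline") as f:
        tokens = f.read().split()
    print("cloud-init=disabled" in tokens)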
Jan 21 13:48:10 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 21 13:48:10 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 21 13:48:10 compute-0 python3.9[100802]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:48:10 compute-0 ceph-mon[75031]: pgmap v205: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:10 compute-0 ceph-mon[75031]: 10.1 scrub starts
Jan 21 13:48:10 compute-0 ceph-mon[75031]: 10.1 scrub ok
Jan 21 13:48:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 21 13:48:10 compute-0 ceph-mon[75031]: osdmap e110: 3 total, 3 up, 3 in
Jan 21 13:48:10 compute-0 ceph-mon[75031]: 8.3 scrub starts
Jan 21 13:48:10 compute-0 ceph-mon[75031]: 8.3 scrub ok
Jan 21 13:48:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:48:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:48:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:48:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:48:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:48:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:48:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 21 13:48:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 21 13:48:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 21 13:48:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 21 13:48:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 21 13:48:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 21 13:48:11 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 21 13:48:11 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 111 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=84/85 n=6 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=111 pruub=13.138989449s) [0] r=-1 lpr=111 pi=[84,111)/1 crt=72'487 active pruub 167.878814697s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:11 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 111 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=84/85 n=6 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=111 pruub=13.138909340s) [0] r=-1 lpr=111 pi=[84,111)/1 crt=72'487 unknown NOTIFY pruub 167.878814697s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:11 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=111) [0] r=0 lpr=111 pi=[84,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:11 compute-0 python3.9[100956]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:48:12 compute-0 sudo[101112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scoyptktmdijfuhfbphhghduceefghwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003292.273541-120-219107633794360/AnsiballZ_setup.py'
Jan 21 13:48:12 compute-0 sudo[101112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:48:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 21 13:48:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 21 13:48:12 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 21 13:48:12 compute-0 ceph-mon[75031]: pgmap v207: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 21 13:48:12 compute-0 ceph-mon[75031]: osdmap e111: 3 total, 3 up, 3 in
Jan 21 13:48:12 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[84,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:12 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[84,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:12 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 112 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=84/85 n=6 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=112) [0]/[2] r=0 lpr=112 pi=[84,112)/1 crt=72'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:12 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 112 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=84/85 n=6 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=112) [0]/[2] r=0 lpr=112 pi=[84,112)/1 crt=72'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:12 compute-0 python3.9[101114]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:48:13 compute-0 sudo[101112]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:13 compute-0 sudo[101196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axclsuiypnkshogbfibnqwdgolzyzlse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003292.273541-120-219107633794360/AnsiballZ_dnf.py'
Jan 21 13:48:13 compute-0 sudo[101196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:48:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 21 13:48:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 21 13:48:13 compute-0 python3.9[101198]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
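The ansible.legacy.dnf task above installs the node's baseline package set with state=present. A one-shot equivalent of that invocation, as a sketch (assumes root on a dnf-based host; the list is copied from the task arguments):

    import subprocess

    packages = [
        "driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager",
        "openstack-selinux", "python3-libselinux", "python3-pyyaml", "rsync",
        "tmpwatch", "sysstat", "iproute-tc", "ksmtuned", "systemd-container",
        "crypto-policies-scripts", "grubby", "sos",
    ]
    subprocess.run(["dnf", "-y", "install", *packages], check=True)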
Jan 21 13:48:13 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 21 13:48:13 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 21 13:48:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 21 13:48:13 compute-0 ceph-mon[75031]: osdmap e112: 3 total, 3 up, 3 in
Jan 21 13:48:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 21 13:48:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 21 13:48:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 21 13:48:13 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 21 13:48:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 113 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=112/113 n=6 ec=52/36 lis/c=84/84 les/c/f=85/85/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[84,112)/1 crt=72'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:48:14 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 21 13:48:14 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 21 13:48:14 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 21 13:48:14 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 21 13:48:14 compute-0 ceph-mon[75031]: pgmap v210: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:14 compute-0 ceph-mon[75031]: 10.17 scrub starts
Jan 21 13:48:14 compute-0 ceph-mon[75031]: 10.17 scrub ok
Jan 21 13:48:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 21 13:48:14 compute-0 ceph-mon[75031]: osdmap e113: 3 total, 3 up, 3 in
Jan 21 13:48:14 compute-0 ceph-mon[75031]: 8.1 scrub starts
Jan 21 13:48:14 compute-0 ceph-mon[75031]: 8.1 scrub ok
Jan 21 13:48:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 21 13:48:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 21 13:48:14 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 21 13:48:14 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 114 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=112/84 les/c/f=113/85/0 sis=114) [0] r=0 lpr=114 pi=[84,114)/1 pct=0'0 crt=72'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:14 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 114 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=0/0 n=6 ec=52/36 lis/c=112/84 les/c/f=113/85/0 sis=114) [0] r=0 lpr=114 pi=[84,114)/1 crt=72'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 114 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=112/113 n=6 ec=52/36 lis/c=112/84 les/c/f=113/85/0 sis=114 pruub=15.188015938s) [0] async=[0] r=-1 lpr=114 pi=[84,114)/1 crt=72'487 active pruub 172.965347290s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:14 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 114 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=112/113 n=6 ec=52/36 lis/c=112/84 les/c/f=113/85/0 sis=114 pruub=15.187899590s) [0] r=-1 lpr=114 pi=[84,114)/1 crt=72'487 unknown NOTIFY pruub 172.965347290s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:15 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 21 13:48:15 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 21 13:48:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Jan 21 13:48:15 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 21 13:48:15 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 21 13:48:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 21 13:48:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 21 13:48:15 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 21 13:48:15 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 115 pg[9.1c( v 72'487 (0'0,72'487] local-lis/les=114/115 n=6 ec=52/36 lis/c=112/84 les/c/f=113/85/0 sis=114) [0] r=0 lpr=114 pi=[84,114)/1 crt=72'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:48:15 compute-0 ceph-mon[75031]: 2.0 scrub starts
Jan 21 13:48:15 compute-0 ceph-mon[75031]: 2.0 scrub ok
Jan 21 13:48:15 compute-0 ceph-mon[75031]: osdmap e114: 3 total, 3 up, 3 in
Jan 21 13:48:15 compute-0 ceph-mon[75031]: 8.0 scrub starts
Jan 21 13:48:15 compute-0 ceph-mon[75031]: 8.0 scrub ok
Jan 21 13:48:15 compute-0 ceph-mon[75031]: 2.16 scrub starts
Jan 21 13:48:15 compute-0 ceph-mon[75031]: 2.16 scrub ok
Jan 21 13:48:16 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 21 13:48:16 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 21 13:48:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:16 compute-0 ceph-mon[75031]: pgmap v213: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Jan 21 13:48:16 compute-0 ceph-mon[75031]: osdmap e115: 3 total, 3 up, 3 in
Jan 21 13:48:16 compute-0 ceph-mon[75031]: 3.b scrub starts
Jan 21 13:48:16 compute-0 ceph-mon[75031]: 3.b scrub ok
Jan 21 13:48:17 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 21 13:48:17 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 21 13:48:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 1 objects/s recovering
Jan 21 13:48:17 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 21 13:48:17 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 21 13:48:17 compute-0 ceph-mon[75031]: 11.c scrub starts
Jan 21 13:48:17 compute-0 ceph-mon[75031]: 11.c scrub ok
Jan 21 13:48:18 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 21 13:48:18 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 21 13:48:18 compute-0 ceph-mon[75031]: pgmap v215: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 1 objects/s recovering
Jan 21 13:48:18 compute-0 ceph-mon[75031]: 5.0 scrub starts
Jan 21 13:48:18 compute-0 ceph-mon[75031]: 5.0 scrub ok
Jan 21 13:48:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 138 B/s, 2 objects/s recovering
Jan 21 13:48:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 21 13:48:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 21 13:48:19 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 21 13:48:19 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 21 13:48:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 21 13:48:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 21 13:48:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 21 13:48:19 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 21 13:48:19 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 116 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=116 pruub=8.702566147s) [0] r=-1 lpr=116 pi=[70,116)/1 crt=69'484 lcod 69'484 active pruub 171.524093628s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:19 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 116 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=116 pruub=8.702425003s) [0] r=-1 lpr=116 pi=[70,116)/1 crt=69'484 lcod 69'484 unknown NOTIFY pruub 171.524093628s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:19 compute-0 ceph-mon[75031]: 10.0 scrub starts
Jan 21 13:48:19 compute-0 ceph-mon[75031]: 10.0 scrub ok
Jan 21 13:48:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 21 13:48:19 compute-0 ceph-mon[75031]: 5.2 scrub starts
Jan 21 13:48:19 compute-0 ceph-mon[75031]: 5.2 scrub ok
Jan 21 13:48:19 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=116) [0] r=0 lpr=116 pi=[70,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:20 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 21 13:48:20 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 21 13:48:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 21 13:48:20 compute-0 ceph-mon[75031]: pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 138 B/s, 2 objects/s recovering
Jan 21 13:48:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 21 13:48:20 compute-0 ceph-mon[75031]: osdmap e116: 3 total, 3 up, 3 in
Jan 21 13:48:20 compute-0 ceph-mon[75031]: 3.4 scrub starts
Jan 21 13:48:20 compute-0 ceph-mon[75031]: 3.4 scrub ok
Jan 21 13:48:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 21 13:48:20 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[70,117)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:20 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[70,117)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:20 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 21 13:48:20 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 117 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=117) [0]/[2] r=0 lpr=117 pi=[70,117)/1 crt=69'484 lcod 69'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:20 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 117 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=70/71 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=117) [0]/[2] r=0 lpr=117 pi=[70,117)/1 crt=69'484 lcod 69'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Jan 21 13:48:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 21 13:48:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:48:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:21 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 21 13:48:21 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 21 13:48:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 21 13:48:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:48:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 21 13:48:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 21 13:48:21 compute-0 ceph-mon[75031]: osdmap e117: 3 total, 3 up, 3 in
Jan 21 13:48:21 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 21 13:48:21 compute-0 ceph-mon[75031]: 2.1f scrub starts
Jan 21 13:48:21 compute-0 ceph-mon[75031]: 2.1f scrub ok
Jan 21 13:48:21 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 118 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=118 pruub=15.664496422s) [1] r=-1 lpr=118 pi=[71,118)/1 crt=42'483 active pruub 180.526473999s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:21 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 118 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=118 pruub=15.664244652s) [1] r=-1 lpr=118 pi=[71,118)/1 crt=42'483 unknown NOTIFY pruub 180.526473999s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:21 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 118 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=117/118 n=6 ec=52/36 lis/c=70/70 les/c/f=71/71/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[70,117)/1 crt=72'485 lcod 69'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:48:21 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=118) [1] r=0 lpr=118 pi=[71,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 21 13:48:22 compute-0 ceph-mon[75031]: pgmap v219: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Jan 21 13:48:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 21 13:48:22 compute-0 ceph-mon[75031]: osdmap e118: 3 total, 3 up, 3 in
Jan 21 13:48:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 21 13:48:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 21 13:48:22 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[71,119)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:22 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[71,119)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:22 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 119 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=119) [1]/[2] r=0 lpr=119 pi=[71,119)/1 crt=42'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:22 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 119 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=71/72 n=6 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=119) [1]/[2] r=0 lpr=119 pi=[71,119)/1 crt=42'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:22 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 119 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=117/118 n=6 ec=52/36 lis/c=117/70 les/c/f=118/71/0 sis=119 pruub=14.974673271s) [0] async=[0] r=-1 lpr=119 pi=[70,119)/1 crt=72'485 lcod 69'484 active pruub 180.866104126s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:22 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 119 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=117/118 n=6 ec=52/36 lis/c=117/70 les/c/f=118/71/0 sis=119 pruub=14.974523544s) [0] r=-1 lpr=119 pi=[70,119)/1 crt=72'485 lcod 69'484 unknown NOTIFY pruub 180.866104126s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:22 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 119 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=0/0 n=6 ec=52/36 lis/c=117/70 les/c/f=118/71/0 sis=119) [0] r=0 lpr=119 pi=[70,119)/1 pct=0'0 crt=72'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:22 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 119 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=0/0 n=6 ec=52/36 lis/c=117/70 les/c/f=118/71/0 sis=119) [0] r=0 lpr=119 pi=[70,119)/1 crt=72'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:23 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 21 13:48:23 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 21 13:48:23 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 21 13:48:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:23 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 21 13:48:23 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 21 13:48:23 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 21 13:48:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 21 13:48:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 21 13:48:23 compute-0 ceph-mon[75031]: osdmap e119: 3 total, 3 up, 3 in
Jan 21 13:48:23 compute-0 ceph-mon[75031]: 7.0 scrub starts
Jan 21 13:48:23 compute-0 ceph-mon[75031]: 7.0 scrub ok
Jan 21 13:48:23 compute-0 ceph-mon[75031]: 5.5 scrub starts
Jan 21 13:48:23 compute-0 ceph-mon[75031]: 5.5 scrub ok
Jan 21 13:48:23 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 21 13:48:23 compute-0 ceph-osd[85740]: osd.0 pg_epoch: 120 pg[9.1e( v 72'485 (0'0,72'485] local-lis/les=119/120 n=6 ec=52/36 lis/c=117/70 les/c/f=118/71/0 sis=119) [0] r=0 lpr=119 pi=[70,119)/1 crt=72'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:48:24 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 120 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=119/120 n=6 ec=52/36 lis/c=71/71 les/c/f=72/72/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[71,119)/1 crt=42'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:48:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 21 13:48:24 compute-0 ceph-mon[75031]: 2.1 scrub starts
Jan 21 13:48:24 compute-0 ceph-mon[75031]: 2.1 scrub ok
Jan 21 13:48:24 compute-0 ceph-mon[75031]: pgmap v222: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:24 compute-0 ceph-mon[75031]: osdmap e120: 3 total, 3 up, 3 in
Jan 21 13:48:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 21 13:48:24 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 21 13:48:24 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 121 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=119/71 les/c/f=120/72/0 sis=121) [1] r=0 lpr=121 pi=[71,121)/1 pct=0'0 crt=42'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:24 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 121 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=0/0 n=6 ec=52/36 lis/c=119/71 les/c/f=120/72/0 sis=121) [1] r=0 lpr=121 pi=[71,121)/1 crt=42'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 21 13:48:24 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 121 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=119/120 n=6 ec=52/36 lis/c=119/71 les/c/f=120/72/0 sis=121 pruub=15.554893494s) [1] async=[1] r=-1 lpr=121 pi=[71,121)/1 crt=42'483 active pruub 183.464660645s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 21 13:48:24 compute-0 ceph-osd[87843]: osd.2 pg_epoch: 121 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=119/120 n=6 ec=52/36 lis/c=119/71 les/c/f=120/72/0 sis=121 pruub=15.554841042s) [1] r=-1 lpr=121 pi=[71,121)/1 crt=42'483 unknown NOTIFY pruub 183.464660645s@ mbc={}] state<Start>: transitioning to Stray
Jan 21 13:48:25 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 21 13:48:25 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 21 13:48:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Jan 21 13:48:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 21 13:48:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 21 13:48:25 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 21 13:48:25 compute-0 ceph-mon[75031]: osdmap e121: 3 total, 3 up, 3 in
Jan 21 13:48:25 compute-0 ceph-osd[86795]: osd.1 pg_epoch: 122 pg[9.1f( v 42'483 (0'0,42'483] local-lis/les=121/122 n=6 ec=52/36 lis/c=119/71 les/c/f=120/72/0 sis=121) [1] r=0 lpr=121 pi=[71,121)/1 crt=42'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 21 13:48:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:26 compute-0 ceph-mon[75031]: 5.6 scrub starts
Jan 21 13:48:26 compute-0 ceph-mon[75031]: 5.6 scrub ok
Jan 21 13:48:26 compute-0 ceph-mon[75031]: pgmap v225: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Jan 21 13:48:26 compute-0 ceph-mon[75031]: osdmap e122: 3 total, 3 up, 3 in
Jan 21 13:48:27 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 21 13:48:27 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 21 13:48:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 1 objects/s recovering
Jan 21 13:48:28 compute-0 ceph-mon[75031]: 10.c scrub starts
Jan 21 13:48:28 compute-0 ceph-mon[75031]: 10.c scrub ok
Jan 21 13:48:28 compute-0 ceph-mon[75031]: pgmap v227: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 1 objects/s recovering
Jan 21 13:48:29 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 21 13:48:29 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 21 13:48:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Jan 21 13:48:29 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 21 13:48:29 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 21 13:48:30 compute-0 ceph-mon[75031]: 3.0 scrub starts
Jan 21 13:48:30 compute-0 ceph-mon[75031]: 3.0 scrub ok
Jan 21 13:48:31 compute-0 ceph-mon[75031]: 10.a scrub starts
Jan 21 13:48:31 compute-0 ceph-mon[75031]: 10.a scrub ok
Jan 21 13:48:31 compute-0 ceph-mon[75031]: pgmap v228: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Jan 21 13:48:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 34 B/s, 1 objects/s recovering
Jan 21 13:48:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:32 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 21 13:48:32 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 21 13:48:32 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 21 13:48:32 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 21 13:48:33 compute-0 ceph-mon[75031]: pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 34 B/s, 1 objects/s recovering
Jan 21 13:48:33 compute-0 ceph-mon[75031]: 2.2 scrub starts
Jan 21 13:48:33 compute-0 ceph-mon[75031]: 2.2 scrub ok
Jan 21 13:48:33 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 21 13:48:33 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 21 13:48:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 21 13:48:33 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 21 13:48:33 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 21 13:48:34 compute-0 ceph-mon[75031]: 5.e scrub starts
Jan 21 13:48:34 compute-0 ceph-mon[75031]: 5.e scrub ok
Jan 21 13:48:34 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 21 13:48:34 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 21 13:48:35 compute-0 ceph-mon[75031]: 5.d scrub starts
Jan 21 13:48:35 compute-0 ceph-mon[75031]: 5.d scrub ok
Jan 21 13:48:35 compute-0 ceph-mon[75031]: pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 21 13:48:35 compute-0 ceph-mon[75031]: 11.a scrub starts
Jan 21 13:48:35 compute-0 ceph-mon[75031]: 11.a scrub ok
Jan 21 13:48:35 compute-0 ceph-mon[75031]: 3.2 scrub starts
Jan 21 13:48:35 compute-0 ceph-mon[75031]: 3.2 scrub ok
Jan 21 13:48:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 21 13:48:36 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 21 13:48:36 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 21 13:48:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:37 compute-0 ceph-mon[75031]: pgmap v231: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 21 13:48:37 compute-0 ceph-mon[75031]: 7.d scrub starts
Jan 21 13:48:37 compute-0 ceph-mon[75031]: 7.d scrub ok
Jan 21 13:48:37 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 21 13:48:37 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 21 13:48:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 21 13:48:38 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 21 13:48:38 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 21 13:48:38 compute-0 ceph-mon[75031]: 5.1c scrub starts
Jan 21 13:48:38 compute-0 ceph-mon[75031]: 5.1c scrub ok
Jan 21 13:48:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:48:39
Jan 21 13:48:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:48:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:48:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.mgr', 'volumes', 'vms', '.rgw.root', 'default.rgw.control']
Jan 21 13:48:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
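The balancer lines above show the mgr's upmap balancer evaluating the listed pools and preparing 0 of a possible 10 upmap changes, i.e. the PG distribution is already even. Its mode and current activity can be queried from the CLI; a sketch, assuming ceph CLI access with an admin keyring:

    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)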
Jan 21 13:48:39 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 21 13:48:39 compute-0 ceph-mon[75031]: pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 21 13:48:39 compute-0 ceph-mon[75031]: 8.7 scrub starts
Jan 21 13:48:39 compute-0 ceph-mon[75031]: 8.7 scrub ok
Jan 21 13:48:39 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 21 13:48:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 21 13:48:40 compute-0 sudo[101341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:48:40 compute-0 sudo[101341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:48:40 compute-0 sudo[101341]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:40 compute-0 sudo[101366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:48:40 compute-0 sudo[101366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:48:40 compute-0 ceph-mon[75031]: 5.1b scrub starts
Jan 21 13:48:40 compute-0 ceph-mon[75031]: 5.1b scrub ok
Jan 21 13:48:40 compute-0 ceph-mon[75031]: pgmap v233: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 21 13:48:40 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 21 13:48:40 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 21 13:48:40 compute-0 sudo[101366]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:48:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:48:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:48:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:48:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:48:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:48:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:48:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:48:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:48:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:48:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:48:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:48:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:48:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:48:41 compute-0 sudo[101423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:48:41 compute-0 sudo[101423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:48:41 compute-0 sudo[101423]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:41 compute-0 sudo[101448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:48:41 compute-0 sudo[101448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:48:41 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 21 13:48:41 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 21 13:48:41 compute-0 podman[101487]: 2026-01-21 13:48:41.569330363 +0000 UTC m=+0.085103540 container create 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 13:48:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:41 compute-0 ceph-mon[75031]: 2.f scrub starts
Jan 21 13:48:41 compute-0 ceph-mon[75031]: 2.f scrub ok
Jan 21 13:48:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:48:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:48:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:48:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:48:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:48:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:48:41 compute-0 podman[101487]: 2026-01-21 13:48:41.521441146 +0000 UTC m=+0.037214363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:48:41 compute-0 systemd[1]: Started libpod-conmon-923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd.scope.
Jan 21 13:48:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:48:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:41 compute-0 podman[101487]: 2026-01-21 13:48:41.706366794 +0000 UTC m=+0.222140001 container init 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:48:41 compute-0 podman[101487]: 2026-01-21 13:48:41.719163904 +0000 UTC m=+0.234937061 container start 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:48:41 compute-0 beautiful_goldwasser[101503]: 167 167
Jan 21 13:48:41 compute-0 systemd[1]: libpod-923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd.scope: Deactivated successfully.
Jan 21 13:48:41 compute-0 podman[101487]: 2026-01-21 13:48:41.765304036 +0000 UTC m=+0.281077243 container attach 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 13:48:41 compute-0 podman[101487]: 2026-01-21 13:48:41.767285907 +0000 UTC m=+0.283059074 container died 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 13:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8feffee9f4e8a24d93f948d616d6673b2258680de1fccd5ad13438e7a92e3102-merged.mount: Deactivated successfully.
Jan 21 13:48:41 compute-0 podman[101487]: 2026-01-21 13:48:41.932964508 +0000 UTC m=+0.448737665 container remove 923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldwasser, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 13:48:41 compute-0 systemd[1]: libpod-conmon-923c41c917c7587beca24c3b9b3c2c072a24831932cad4b3e9ed7a73f5bb0dcd.scope: Deactivated successfully.
Jan 21 13:48:42 compute-0 podman[101529]: 2026-01-21 13:48:42.197458332 +0000 UTC m=+0.089536505 container create 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:48:42 compute-0 podman[101529]: 2026-01-21 13:48:42.147294035 +0000 UTC m=+0.039372238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:48:42 compute-0 systemd[1]: Started libpod-conmon-9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83.scope.
Jan 21 13:48:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:42 compute-0 podman[101529]: 2026-01-21 13:48:42.356784227 +0000 UTC m=+0.248862480 container init 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:48:42 compute-0 podman[101529]: 2026-01-21 13:48:42.368806208 +0000 UTC m=+0.260884421 container start 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:48:42 compute-0 podman[101529]: 2026-01-21 13:48:42.388282031 +0000 UTC m=+0.280360434 container attach 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:48:42 compute-0 upbeat_northcutt[101547]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:48:42 compute-0 upbeat_northcutt[101547]: --> All data devices are unavailable
Jan 21 13:48:42 compute-0 systemd[1]: libpod-9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83.scope: Deactivated successfully.
Jan 21 13:48:42 compute-0 podman[101529]: 2026-01-21 13:48:42.949028738 +0000 UTC m=+0.841106951 container died 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:48:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a09912a188c71de9b63a56853105258a2ef7b7133ef93f476b9e34be1f782134-merged.mount: Deactivated successfully.
Jan 21 13:48:43 compute-0 ceph-mon[75031]: 3.1e scrub starts
Jan 21 13:48:43 compute-0 ceph-mon[75031]: 3.1e scrub ok
Jan 21 13:48:43 compute-0 ceph-mon[75031]: pgmap v234: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:43 compute-0 podman[101529]: 2026-01-21 13:48:43.111589559 +0000 UTC m=+1.003667732 container remove 9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_northcutt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 13:48:43 compute-0 systemd[1]: libpod-conmon-9d414490cc3bf5de104525c3798e45de7556cc1aa290dd9fe7f3f2f94246bf83.scope: Deactivated successfully.
Jan 21 13:48:43 compute-0 sudo[101448]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:43 compute-0 sudo[101584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:48:43 compute-0 sudo[101584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:48:43 compute-0 sudo[101584]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:43 compute-0 sudo[101609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:48:43 compute-0 sudo[101609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:48:43 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 21 13:48:43 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 21 13:48:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:43 compute-0 podman[101646]: 2026-01-21 13:48:43.665065898 +0000 UTC m=+0.068110881 container create a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 13:48:43 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 21 13:48:43 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 21 13:48:43 compute-0 podman[101646]: 2026-01-21 13:48:43.624185462 +0000 UTC m=+0.027230465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:48:43 compute-0 systemd[1]: Started libpod-conmon-a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae.scope.
Jan 21 13:48:43 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:48:43 compute-0 podman[101646]: 2026-01-21 13:48:43.791249988 +0000 UTC m=+0.194295021 container init a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:48:43 compute-0 podman[101646]: 2026-01-21 13:48:43.797688654 +0000 UTC m=+0.200733627 container start a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 13:48:43 compute-0 agitated_benz[101662]: 167 167
Jan 21 13:48:43 compute-0 systemd[1]: libpod-a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae.scope: Deactivated successfully.
Jan 21 13:48:43 compute-0 podman[101646]: 2026-01-21 13:48:43.822748921 +0000 UTC m=+0.225793934 container attach a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 13:48:43 compute-0 podman[101646]: 2026-01-21 13:48:43.823181083 +0000 UTC m=+0.226226066 container died a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:48:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8359b6be5575aa59c7e77252a7c859dfd42547dea463eaccb1d9fed9b6c5913b-merged.mount: Deactivated successfully.
Jan 21 13:48:44 compute-0 podman[101646]: 2026-01-21 13:48:44.34137179 +0000 UTC m=+0.744416743 container remove a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:48:44 compute-0 ceph-mon[75031]: 10.4 scrub starts
Jan 21 13:48:44 compute-0 ceph-mon[75031]: 10.4 scrub ok
Jan 21 13:48:44 compute-0 systemd[1]: libpod-conmon-a55dfbecaa6838200398e623e9c8182ba1070f72627a67cb4aecb3f30c5ec5ae.scope: Deactivated successfully.
Jan 21 13:48:44 compute-0 podman[101688]: 2026-01-21 13:48:44.510701435 +0000 UTC m=+0.040585530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:48:44 compute-0 podman[101688]: 2026-01-21 13:48:44.673046819 +0000 UTC m=+0.202930854 container create a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:48:44 compute-0 systemd[1]: Started libpod-conmon-a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10.scope.
Jan 21 13:48:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263783f6275288d88e1bb81515829c5cf2d9b0fae56efa8e58fc36b870fd07fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263783f6275288d88e1bb81515829c5cf2d9b0fae56efa8e58fc36b870fd07fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263783f6275288d88e1bb81515829c5cf2d9b0fae56efa8e58fc36b870fd07fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263783f6275288d88e1bb81515829c5cf2d9b0fae56efa8e58fc36b870fd07fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:45 compute-0 podman[101688]: 2026-01-21 13:48:45.241147667 +0000 UTC m=+0.771031712 container init a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:48:45 compute-0 podman[101688]: 2026-01-21 13:48:45.25332455 +0000 UTC m=+0.783208585 container start a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 13:48:45 compute-0 podman[101688]: 2026-01-21 13:48:45.296884726 +0000 UTC m=+0.826768761 container attach a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:48:45 compute-0 ceph-mon[75031]: 7.1a scrub starts
Jan 21 13:48:45 compute-0 ceph-mon[75031]: 7.1a scrub ok
Jan 21 13:48:45 compute-0 ceph-mon[75031]: pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:45 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 21 13:48:45 compute-0 interesting_bassi[101705]: {
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:     "0": [
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:         {
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "devices": [
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "/dev/loop3"
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             ],
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_name": "ceph_lv0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_size": "21470642176",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "name": "ceph_lv0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "tags": {
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.cluster_name": "ceph",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.crush_device_class": "",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.encrypted": "0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.objectstore": "bluestore",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.osd_id": "0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.type": "block",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.vdo": "0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.with_tpm": "0"
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             },
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "type": "block",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "vg_name": "ceph_vg0"
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:         }
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:     ],
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:     "1": [
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:         {
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "devices": [
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "/dev/loop4"
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             ],
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_name": "ceph_lv1",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_size": "21470642176",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "name": "ceph_lv1",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "tags": {
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.cluster_name": "ceph",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.crush_device_class": "",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.encrypted": "0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.objectstore": "bluestore",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.osd_id": "1",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.type": "block",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.vdo": "0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.with_tpm": "0"
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             },
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "type": "block",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "vg_name": "ceph_vg1"
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:         }
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:     ],
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:     "2": [
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:         {
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "devices": [
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "/dev/loop5"
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             ],
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_name": "ceph_lv2",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_size": "21470642176",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "name": "ceph_lv2",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "tags": {
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.cluster_name": "ceph",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.crush_device_class": "",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.encrypted": "0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.objectstore": "bluestore",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.osd_id": "2",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.type": "block",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.vdo": "0",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:                 "ceph.with_tpm": "0"
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             },
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "type": "block",
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:             "vg_name": "ceph_vg2"
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:         }
Jan 21 13:48:45 compute-0 interesting_bassi[101705]:     ]
Jan 21 13:48:45 compute-0 interesting_bassi[101705]: }
Jan 21 13:48:45 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 21 13:48:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:45 compute-0 systemd[1]: libpod-a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10.scope: Deactivated successfully.
Jan 21 13:48:45 compute-0 podman[101688]: 2026-01-21 13:48:45.595982713 +0000 UTC m=+1.125866758 container died a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 13:48:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-263783f6275288d88e1bb81515829c5cf2d9b0fae56efa8e58fc36b870fd07fa-merged.mount: Deactivated successfully.
Jan 21 13:48:45 compute-0 podman[101688]: 2026-01-21 13:48:45.826292253 +0000 UTC m=+1.356176298 container remove a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bassi, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:48:45 compute-0 systemd[1]: libpod-conmon-a2a2de3fcef91651e2b82afa333de4840633e7a41ec6fcaf2eb4398335000b10.scope: Deactivated successfully.
Jan 21 13:48:45 compute-0 sudo[101609]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:45 compute-0 sudo[101727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:48:45 compute-0 sudo[101727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:48:45 compute-0 sudo[101727]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:46 compute-0 sudo[101752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:48:46 compute-0 sudo[101752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:48:46 compute-0 podman[101788]: 2026-01-21 13:48:46.417842416 +0000 UTC m=+0.044023358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:48:46 compute-0 ceph-mon[75031]: 8.15 scrub starts
Jan 21 13:48:46 compute-0 ceph-mon[75031]: 8.15 scrub ok
Jan 21 13:48:46 compute-0 podman[101788]: 2026-01-21 13:48:46.534536931 +0000 UTC m=+0.160717813 container create 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:48:46 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 21 13:48:46 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 21 13:48:46 compute-0 systemd[1]: Started libpod-conmon-87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef.scope.
Jan 21 13:48:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:48:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:46 compute-0 podman[101788]: 2026-01-21 13:48:46.695343836 +0000 UTC m=+0.321524768 container init 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:48:46 compute-0 podman[101788]: 2026-01-21 13:48:46.702978123 +0000 UTC m=+0.329159015 container start 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 13:48:46 compute-0 objective_mendel[101804]: 167 167
Jan 21 13:48:46 compute-0 systemd[1]: libpod-87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef.scope: Deactivated successfully.
Jan 21 13:48:46 compute-0 podman[101788]: 2026-01-21 13:48:46.725367232 +0000 UTC m=+0.351548144 container attach 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 13:48:46 compute-0 podman[101788]: 2026-01-21 13:48:46.725848614 +0000 UTC m=+0.352029506 container died 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 13:48:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-098b712bd7fb031e63d6f63a5f20d60b2049065a1b65cae29541a8eaf615cdb0-merged.mount: Deactivated successfully.
Jan 21 13:48:46 compute-0 podman[101788]: 2026-01-21 13:48:46.916430008 +0000 UTC m=+0.542610910 container remove 87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:48:46 compute-0 systemd[1]: libpod-conmon-87115658e88f12cf5aef38690a457d3c9369d875b7eaae4f7e201b969bb684ef.scope: Deactivated successfully.
Jan 21 13:48:47 compute-0 podman[101829]: 2026-01-21 13:48:47.175751677 +0000 UTC m=+0.094956824 container create 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 13:48:47 compute-0 podman[101829]: 2026-01-21 13:48:47.123549859 +0000 UTC m=+0.042755056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:48:47 compute-0 systemd[1]: Started libpod-conmon-88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2.scope.
Jan 21 13:48:47 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b1f857a68201e15fec42d571fc4e72aa509ed28ce75f7ecad4c9b6a50439f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b1f857a68201e15fec42d571fc4e72aa509ed28ce75f7ecad4c9b6a50439f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b1f857a68201e15fec42d571fc4e72aa509ed28ce75f7ecad4c9b6a50439f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b1f857a68201e15fec42d571fc4e72aa509ed28ce75f7ecad4c9b6a50439f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:48:47 compute-0 podman[101829]: 2026-01-21 13:48:47.337135686 +0000 UTC m=+0.256340833 container init 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 13:48:47 compute-0 podman[101829]: 2026-01-21 13:48:47.349028394 +0000 UTC m=+0.268233541 container start 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 21 13:48:47 compute-0 podman[101829]: 2026-01-21 13:48:47.377439428 +0000 UTC m=+0.296644575 container attach 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 13:48:47 compute-0 ceph-mon[75031]: pgmap v236: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:47 compute-0 ceph-mon[75031]: 11.5 scrub starts
Jan 21 13:48:47 compute-0 ceph-mon[75031]: 11.5 scrub ok
Jan 21 13:48:47 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 21 13:48:47 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 21 13:48:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:47 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 21 13:48:47 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 21 13:48:48 compute-0 lvm[101926]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:48:48 compute-0 lvm[101926]: VG ceph_vg1 finished
Jan 21 13:48:48 compute-0 lvm[101924]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:48:48 compute-0 lvm[101924]: VG ceph_vg0 finished
Jan 21 13:48:48 compute-0 lvm[101927]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:48:48 compute-0 lvm[101927]: VG ceph_vg2 finished
Jan 21 13:48:48 compute-0 brave_ellis[101846]: {}
Jan 21 13:48:48 compute-0 systemd[1]: libpod-88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2.scope: Deactivated successfully.
Jan 21 13:48:48 compute-0 podman[101829]: 2026-01-21 13:48:48.183457542 +0000 UTC m=+1.102662689 container died 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 13:48:48 compute-0 systemd[1]: libpod-88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2.scope: Consumed 1.310s CPU time.
Jan 21 13:48:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-16b1f857a68201e15fec42d571fc4e72aa509ed28ce75f7ecad4c9b6a50439f1-merged.mount: Deactivated successfully.
Jan 21 13:48:48 compute-0 podman[101829]: 2026-01-21 13:48:48.335536521 +0000 UTC m=+1.254741668 container remove 88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ellis, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 21 13:48:48 compute-0 systemd[1]: libpod-conmon-88d42077e4bb60959373f474d059e869f178fc8c98a1bd34d3de0c874c358ad2.scope: Deactivated successfully.
Jan 21 13:48:48 compute-0 sudo[101752]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:48:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:48:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:48:48 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 21 13:48:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:48:48 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 21 13:48:48 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 21 13:48:48 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 21 13:48:48 compute-0 sudo[101945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:48:48 compute-0 sudo[101945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:48:48 compute-0 sudo[101945]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:48 compute-0 ceph-mon[75031]: 3.d scrub starts
Jan 21 13:48:48 compute-0 ceph-mon[75031]: 3.d scrub ok
Jan 21 13:48:48 compute-0 ceph-mon[75031]: pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:48 compute-0 ceph-mon[75031]: 2.8 scrub starts
Jan 21 13:48:48 compute-0 ceph-mon[75031]: 2.8 scrub ok
Jan 21 13:48:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:48:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:48:49 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 21 13:48:49 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 21 13:48:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:49 compute-0 ceph-mon[75031]: 4.1b scrub starts
Jan 21 13:48:49 compute-0 ceph-mon[75031]: 4.1b scrub ok
Jan 21 13:48:49 compute-0 ceph-mon[75031]: 8.5 scrub starts
Jan 21 13:48:49 compute-0 ceph-mon[75031]: 8.5 scrub ok
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:48:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 13:48:50 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 21 13:48:50 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 21 13:48:50 compute-0 ceph-mon[75031]: 11.7 scrub starts
Jan 21 13:48:50 compute-0 ceph-mon[75031]: 11.7 scrub ok
Jan 21 13:48:50 compute-0 ceph-mon[75031]: pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:51 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 21 13:48:51 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 21 13:48:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:51 compute-0 ceph-mon[75031]: 7.b scrub starts
Jan 21 13:48:51 compute-0 ceph-mon[75031]: 7.b scrub ok
Jan 21 13:48:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:52 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 21 13:48:52 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 21 13:48:52 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 21 13:48:52 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 21 13:48:52 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 21 13:48:52 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 21 13:48:52 compute-0 ceph-mon[75031]: 4.18 scrub starts
Jan 21 13:48:52 compute-0 ceph-mon[75031]: 4.18 scrub ok
Jan 21 13:48:52 compute-0 ceph-mon[75031]: pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:52 compute-0 ceph-mon[75031]: 7.14 scrub starts
Jan 21 13:48:52 compute-0 ceph-mon[75031]: 7.14 scrub ok
Jan 21 13:48:53 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 21 13:48:53 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 21 13:48:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:53 compute-0 ceph-mon[75031]: 4.1a scrub starts
Jan 21 13:48:53 compute-0 ceph-mon[75031]: 4.1a scrub ok
Jan 21 13:48:53 compute-0 ceph-mon[75031]: 10.8 scrub starts
Jan 21 13:48:53 compute-0 ceph-mon[75031]: 10.8 scrub ok
Jan 21 13:48:53 compute-0 ceph-mon[75031]: 3.10 scrub starts
Jan 21 13:48:53 compute-0 ceph-mon[75031]: 3.10 scrub ok
Jan 21 13:48:54 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 21 13:48:54 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 21 13:48:54 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 21 13:48:54 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 21 13:48:54 compute-0 ceph-mon[75031]: pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:54 compute-0 ceph-mon[75031]: 7.16 scrub starts
Jan 21 13:48:54 compute-0 ceph-mon[75031]: 7.16 scrub ok
Jan 21 13:48:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:55 compute-0 ceph-mon[75031]: 11.12 scrub starts
Jan 21 13:48:55 compute-0 ceph-mon[75031]: 11.12 scrub ok
Jan 21 13:48:56 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 21 13:48:56 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 21 13:48:56 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 21 13:48:56 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 21 13:48:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:48:56 compute-0 ceph-mon[75031]: pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:56 compute-0 ceph-mon[75031]: 8.19 scrub starts
Jan 21 13:48:56 compute-0 ceph-mon[75031]: 8.19 scrub ok
Jan 21 13:48:57 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 21 13:48:57 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 21 13:48:57 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 21 13:48:57 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 21 13:48:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:57 compute-0 sudo[101196]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:57 compute-0 ceph-mon[75031]: 2.18 scrub starts
Jan 21 13:48:57 compute-0 ceph-mon[75031]: 2.18 scrub ok
Jan 21 13:48:57 compute-0 ceph-mon[75031]: 3.13 scrub starts
Jan 21 13:48:57 compute-0 ceph-mon[75031]: 3.13 scrub ok
Jan 21 13:48:58 compute-0 sudo[102119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgejppyhyqgpcoxjblmbwgurrfusjvfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003337.8954606-132-202323434265579/AnsiballZ_command.py'
Jan 21 13:48:58 compute-0 sudo[102119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:48:58 compute-0 python3.9[102121]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:48:58 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 21 13:48:58 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 21 13:48:58 compute-0 ceph-mon[75031]: 5.1e scrub starts
Jan 21 13:48:58 compute-0 ceph-mon[75031]: 5.1e scrub ok
Jan 21 13:48:58 compute-0 ceph-mon[75031]: pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:59 compute-0 sudo[102119]: pam_unix(sudo:session): session closed for user root
Jan 21 13:48:59 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 21 13:48:59 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 21 13:48:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:48:59 compute-0 ceph-mon[75031]: 2.19 scrub starts
Jan 21 13:48:59 compute-0 ceph-mon[75031]: 2.19 scrub ok
Jan 21 13:48:59 compute-0 ceph-mon[75031]: 7.17 scrub starts
Jan 21 13:48:59 compute-0 ceph-mon[75031]: 7.17 scrub ok
Jan 21 13:49:00 compute-0 sudo[102406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plxlraagucnfohfukpzzxwujnjfawesr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003339.3616326-140-158078541885499/AnsiballZ_selinux.py'
Jan 21 13:49:00 compute-0 sudo[102406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:00 compute-0 python3.9[102408]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 21 13:49:00 compute-0 sudo[102406]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:00 compute-0 ceph-mon[75031]: pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:00 compute-0 sudo[102558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grgvlplzrkqpylaaadlvkyvuwccsxgzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003340.662254-151-21623023263876/AnsiballZ_command.py'
Jan 21 13:49:00 compute-0 sudo[102558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:01 compute-0 python3.9[102560]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 21 13:49:01 compute-0 sudo[102558]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:01 compute-0 sudo[102710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzcozcgfgljlirfpnvfgexkzlxaywemk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003341.367231-159-201088577241161/AnsiballZ_file.py'
Jan 21 13:49:01 compute-0 sudo[102710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:01 compute-0 python3.9[102712]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:49:01 compute-0 sudo[102710]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:02 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 21 13:49:02 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 21 13:49:02 compute-0 sudo[102862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsyjzmgzbkbzxrlnaxmcgtpyhdbauvbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003342.1181612-167-90917916816467/AnsiballZ_mount.py'
Jan 21 13:49:02 compute-0 sudo[102862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:02 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 21 13:49:02 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 21 13:49:02 compute-0 ceph-mon[75031]: pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:02 compute-0 python3.9[102864]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 21 13:49:02 compute-0 sudo[102862]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:03 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 21 13:49:03 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 21 13:49:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:03 compute-0 ceph-mon[75031]: 3.1d scrub starts
Jan 21 13:49:03 compute-0 ceph-mon[75031]: 3.1d scrub ok
Jan 21 13:49:03 compute-0 ceph-mon[75031]: 11.10 scrub starts
Jan 21 13:49:03 compute-0 ceph-mon[75031]: 11.10 scrub ok
Jan 21 13:49:03 compute-0 sudo[103014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apqgrvnwqexpdbraygkgxdzjrwwqyciq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003343.6516755-195-825729760107/AnsiballZ_file.py'
Jan 21 13:49:03 compute-0 sudo[103014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:04 compute-0 python3.9[103016]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:49:04 compute-0 sudo[103014]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:04 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 21 13:49:04 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 21 13:49:04 compute-0 sudo[103166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smaxpjmdkjwneuhjlbjydhuxxvtcxgwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003344.3772168-203-62095524658583/AnsiballZ_stat.py'
Jan 21 13:49:04 compute-0 sudo[103166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:04 compute-0 python3.9[103168]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:49:04 compute-0 sudo[103166]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:05 compute-0 sudo[103244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzqahfrbpzxuzwrufaskfryhzqjtntmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003344.3772168-203-62095524658583/AnsiballZ_file.py'
Jan 21 13:49:05 compute-0 sudo[103244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:05 compute-0 ceph-mon[75031]: 11.15 scrub starts
Jan 21 13:49:05 compute-0 ceph-mon[75031]: 11.15 scrub ok
Jan 21 13:49:05 compute-0 ceph-mon[75031]: pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:05 compute-0 ceph-mon[75031]: 7.10 scrub starts
Jan 21 13:49:05 compute-0 ceph-mon[75031]: 7.10 scrub ok
Jan 21 13:49:05 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 21 13:49:05 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 21 13:49:05 compute-0 python3.9[103246]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:49:05 compute-0 sudo[103244]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:06 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 21 13:49:06 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 21 13:49:06 compute-0 sudo[103396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcswpccdcymspicmjwimjhsudizfbpos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003346.2334678-224-21680127832737/AnsiballZ_stat.py'
Jan 21 13:49:06 compute-0 sudo[103396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:06 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 21 13:49:06 compute-0 python3.9[103398]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:49:06 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 21 13:49:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:06 compute-0 sudo[103396]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:06 compute-0 ceph-mon[75031]: pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:06 compute-0 ceph-mon[75031]: 2.b scrub starts
Jan 21 13:49:06 compute-0 ceph-mon[75031]: 2.b scrub ok
Jan 21 13:49:07 compute-0 sudo[103550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wglvjxxxwvaepptqiylujsnfvfmadnpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003347.1208212-237-181673846347713/AnsiballZ_getent.py'
Jan 21 13:49:07 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 21 13:49:07 compute-0 sudo[103550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:07 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 21 13:49:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:07 compute-0 ceph-mon[75031]: 8.11 scrub starts
Jan 21 13:49:07 compute-0 ceph-mon[75031]: 8.11 scrub ok
Jan 21 13:49:07 compute-0 ceph-mon[75031]: 5.4 scrub starts
Jan 21 13:49:07 compute-0 ceph-mon[75031]: 5.4 scrub ok
Jan 21 13:49:07 compute-0 ceph-mon[75031]: 3.14 scrub starts
Jan 21 13:49:07 compute-0 ceph-mon[75031]: 3.14 scrub ok
Jan 21 13:49:07 compute-0 python3.9[103552]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 21 13:49:07 compute-0 sudo[103550]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:08 compute-0 sudo[103703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agbuceshipgllxeulctlffrbkxjtseqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003348.0712972-247-39502183174489/AnsiballZ_getent.py'
Jan 21 13:49:08 compute-0 sudo[103703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:08 compute-0 python3.9[103705]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 21 13:49:08 compute-0 sudo[103703]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:08 compute-0 ceph-mon[75031]: pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:09 compute-0 sudo[103856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvcadyljrppgmamzczsqhyoxxjyhbusk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003348.8196743-255-57050118099531/AnsiballZ_group.py'
Jan 21 13:49:09 compute-0 sudo[103856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:09 compute-0 python3.9[103858]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 13:49:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:09 compute-0 sudo[103856]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:10 compute-0 sudo[104008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoziaptomaaxmzkonwwwmpxfpfvziwhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003349.840934-264-34629400395896/AnsiballZ_file.py'
Jan 21 13:49:10 compute-0 sudo[104008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:10 compute-0 python3.9[104010]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 21 13:49:10 compute-0 sudo[104008]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:10 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 21 13:49:10 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 21 13:49:10 compute-0 ceph-mon[75031]: pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:10 compute-0 ceph-mon[75031]: 7.1f scrub starts
Jan 21 13:49:10 compute-0 ceph-mon[75031]: 7.1f scrub ok
Jan 21 13:49:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:49:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:49:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:49:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:49:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:49:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:49:11 compute-0 sudo[104160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzgolziqgydzrshskzxibnfdpfvvqfnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003350.7090259-275-241806820535195/AnsiballZ_dnf.py'
Jan 21 13:49:11 compute-0 sudo[104160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:11 compute-0 python3.9[104162]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:49:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:12 compute-0 sudo[104160]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:12 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 21 13:49:12 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 21 13:49:12 compute-0 ceph-mon[75031]: pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:13 compute-0 sudo[104313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enzopbjizfhtppwmqvpacupbicphqxvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003352.7116098-283-124213488586530/AnsiballZ_file.py'
Jan 21 13:49:13 compute-0 sudo[104313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:13 compute-0 python3.9[104315]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:49:13 compute-0 sudo[104313]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:13 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 21 13:49:13 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 21 13:49:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:13 compute-0 sudo[104465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doqvzlbsoecmktrngtxwwbvnjmwrving ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003353.3811922-291-35122546586553/AnsiballZ_stat.py'
Jan 21 13:49:13 compute-0 sudo[104465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:13 compute-0 python3.9[104467]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:49:13 compute-0 ceph-mon[75031]: 7.c scrub starts
Jan 21 13:49:13 compute-0 ceph-mon[75031]: 7.c scrub ok
Jan 21 13:49:13 compute-0 sudo[104465]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:14 compute-0 sudo[104543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywfsigjhecgiuvpowdcjzjmhuwcmigsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003353.3811922-291-35122546586553/AnsiballZ_file.py'
Jan 21 13:49:14 compute-0 sudo[104543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:14 compute-0 python3.9[104545]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:49:14 compute-0 sudo[104543]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:14 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 21 13:49:14 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 21 13:49:14 compute-0 ceph-mon[75031]: 3.7 scrub starts
Jan 21 13:49:14 compute-0 ceph-mon[75031]: 3.7 scrub ok
Jan 21 13:49:14 compute-0 ceph-mon[75031]: pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:14 compute-0 ceph-mon[75031]: 7.4 scrub starts
Jan 21 13:49:14 compute-0 ceph-mon[75031]: 7.4 scrub ok
Jan 21 13:49:15 compute-0 sudo[104695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sttwdtngnwrndddlrxqrpjyqqrzovtzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003354.690915-304-33946434911036/AnsiballZ_stat.py'
Jan 21 13:49:15 compute-0 sudo[104695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:15 compute-0 python3.9[104697]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:49:15 compute-0 sudo[104695]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:15 compute-0 sudo[104773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pacfwitokmdalnzbafwglulhjjognidy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003354.690915-304-33946434911036/AnsiballZ_file.py'
Jan 21 13:49:15 compute-0 sudo[104773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:15 compute-0 python3.9[104775]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:49:15 compute-0 sudo[104773]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:16 compute-0 sudo[104925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqgnufminpeyzelghydjuyhglaslptxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003356.1026545-319-232504854686558/AnsiballZ_dnf.py'
Jan 21 13:49:16 compute-0 sudo[104925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:16 compute-0 python3.9[104927]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:49:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:16 compute-0 ceph-mon[75031]: pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:17 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 21 13:49:17 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 21 13:49:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:17 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 21 13:49:17 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 21 13:49:17 compute-0 sudo[104925]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:17 compute-0 ceph-mon[75031]: 11.1d scrub starts
Jan 21 13:49:17 compute-0 ceph-mon[75031]: 11.1d scrub ok
Jan 21 13:49:17 compute-0 ceph-mon[75031]: 11.4 scrub starts
Jan 21 13:49:17 compute-0 ceph-mon[75031]: 11.4 scrub ok
Jan 21 13:49:18 compute-0 python3.9[105078]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:49:18 compute-0 ceph-mon[75031]: pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:19 compute-0 python3.9[105230]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 21 13:49:19 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 21 13:49:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:19 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 21 13:49:20 compute-0 ceph-mon[75031]: 11.3 scrub starts
Jan 21 13:49:20 compute-0 ceph-mon[75031]: 11.3 scrub ok
Jan 21 13:49:20 compute-0 python3.9[105380]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:49:20 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 21 13:49:20 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 21 13:49:21 compute-0 sudo[105530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptzmfyxotxffukmkvztzmlqklmleedte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003360.7838671-360-188824998851349/AnsiballZ_systemd.py'
Jan 21 13:49:21 compute-0 sudo[105530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:21 compute-0 ceph-mon[75031]: pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:21 compute-0 ceph-mon[75031]: 8.b scrub starts
Jan 21 13:49:21 compute-0 ceph-mon[75031]: 8.b scrub ok
Jan 21 13:49:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:21 compute-0 python3.9[105532]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:49:21 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 21 13:49:21 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 21 13:49:21 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 21 13:49:21 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 21 13:49:22 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 21 13:49:22 compute-0 sudo[105530]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:22 compute-0 ceph-mon[75031]: pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:22 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 21 13:49:22 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 21 13:49:22 compute-0 python3.9[105693]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 21 13:49:23 compute-0 ceph-mon[75031]: 8.1e scrub starts
Jan 21 13:49:23 compute-0 ceph-mon[75031]: 8.1e scrub ok
Jan 21 13:49:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:23 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 21 13:49:23 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 21 13:49:24 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 21 13:49:24 compute-0 ceph-mon[75031]: pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:24 compute-0 ceph-mon[75031]: 11.14 scrub starts
Jan 21 13:49:24 compute-0 ceph-mon[75031]: 11.14 scrub ok
Jan 21 13:49:24 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 21 13:49:24 compute-0 sudo[105843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wehbxgochhaulmwvxaviwrlftsdsmlmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003364.4890475-417-80335651598562/AnsiballZ_systemd.py'
Jan 21 13:49:24 compute-0 sudo[105843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:25 compute-0 python3.9[105845]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:49:25 compute-0 sudo[105843]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:25 compute-0 ceph-mon[75031]: 7.12 scrub starts
Jan 21 13:49:25 compute-0 ceph-mon[75031]: 7.12 scrub ok
Jan 21 13:49:25 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 21 13:49:25 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 21 13:49:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:25 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 21 13:49:25 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 21 13:49:25 compute-0 sudo[105997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqpjluxpmbxahsazsmhbhkmcpfgfoeia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003365.3589263-417-98634567819079/AnsiballZ_systemd.py'
Jan 21 13:49:25 compute-0 sudo[105997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:25 compute-0 python3.9[105999]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:49:26 compute-0 sudo[105997]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:26 compute-0 sshd-session[99284]: Connection closed by 192.168.122.30 port 38186
Jan 21 13:49:26 compute-0 sshd-session[99281]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:49:26 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 21 13:49:26 compute-0 systemd[1]: session-35.scope: Consumed 1min 7.322s CPU time.
Jan 21 13:49:26 compute-0 systemd-logind[780]: Session 35 logged out. Waiting for processes to exit.
Jan 21 13:49:26 compute-0 systemd-logind[780]: Removed session 35.
Jan 21 13:49:26 compute-0 ceph-mon[75031]: 5.13 scrub starts
Jan 21 13:49:26 compute-0 ceph-mon[75031]: 3.5 scrub starts
Jan 21 13:49:26 compute-0 ceph-mon[75031]: pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:26 compute-0 ceph-mon[75031]: 5.13 scrub ok
Jan 21 13:49:26 compute-0 ceph-mon[75031]: 3.5 scrub ok
Jan 21 13:49:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:26 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 21 13:49:26 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 21 13:49:27 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 21 13:49:27 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 21 13:49:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:27 compute-0 ceph-mon[75031]: 7.18 scrub starts
Jan 21 13:49:27 compute-0 ceph-mon[75031]: 7.18 scrub ok
Jan 21 13:49:28 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 21 13:49:28 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 21 13:49:28 compute-0 ceph-mon[75031]: 11.b scrub starts
Jan 21 13:49:28 compute-0 ceph-mon[75031]: 11.b scrub ok
Jan 21 13:49:28 compute-0 ceph-mon[75031]: pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:30 compute-0 ceph-mon[75031]: 10.7 scrub starts
Jan 21 13:49:30 compute-0 ceph-mon[75031]: 10.7 scrub ok
Jan 21 13:49:30 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 21 13:49:30 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 21 13:49:31 compute-0 ceph-mon[75031]: pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:31 compute-0 ceph-mon[75031]: 11.d scrub starts
Jan 21 13:49:31 compute-0 ceph-mon[75031]: 11.d scrub ok
Jan 21 13:49:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:31 compute-0 sshd-session[106026]: Accepted publickey for zuul from 192.168.122.30 port 44042 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:49:31 compute-0 systemd-logind[780]: New session 36 of user zuul.
Jan 21 13:49:31 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 21 13:49:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:31 compute-0 sshd-session[106026]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:49:32 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 21 13:49:32 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 21 13:49:32 compute-0 ceph-mon[75031]: pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:32 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 21 13:49:32 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 21 13:49:32 compute-0 python3.9[106179]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:49:33 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 21 13:49:33 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 21 13:49:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:33 compute-0 ceph-mon[75031]: 3.8 scrub starts
Jan 21 13:49:33 compute-0 ceph-mon[75031]: 3.8 scrub ok
Jan 21 13:49:33 compute-0 ceph-mon[75031]: 7.9 scrub starts
Jan 21 13:49:33 compute-0 ceph-mon[75031]: 7.9 scrub ok
Jan 21 13:49:33 compute-0 ceph-mon[75031]: 2.17 scrub starts
Jan 21 13:49:33 compute-0 ceph-mon[75031]: 2.17 scrub ok
Jan 21 13:49:33 compute-0 sudo[106333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apritshcmtmuvixxnbrweywpgihsicih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003373.4620075-31-132511574027375/AnsiballZ_getent.py'
Jan 21 13:49:33 compute-0 sudo[106333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:34 compute-0 python3.9[106335]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 21 13:49:34 compute-0 sudo[106333]: pam_unix(sudo:session): session closed for user root
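Every Ansible task in this run leaves the same fingerprint: a sudo COMMAND wrapping an echo BECOME-SUCCESS-... plus an AnsiballZ_<module>.py payload, a pam_unix session open/close pair, and the module's own 'Invoked with' line in between. A small sketch (regex is mine, fitted to the exact layout here) to recover the module name and payload directory from the sudo line:

    import re

    SUDO_RE = re.compile(
        r"COMMAND=.*?(?P<tmpdir>/home/zuul/\.ansible/tmp/ansible-tmp-[\w.\-]+)"
        r"/AnsiballZ_(?P<module>\w+)\.py"
    )

    line = ("zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; "
            "COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apritshcmtmuvixxnbrweywpgihsicih ; "
            "/usr/bin/python3.9 /home/zuul/.ansible/tmp/"
            "ansible-tmp-1769003373.4620075-31-132511574027375/AnsiballZ_getent.py'")
    m = SUDO_RE.search(line)
    print(m.group("module"), m.group("tmpdir"))
    # getent /home/zuul/.ansible/tmp/ansible-tmp-1769003373.4620075-31-132511574027375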
Jan 21 13:49:34 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 21 13:49:34 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 21 13:49:34 compute-0 ceph-mon[75031]: pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:34 compute-0 sudo[106486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxnacylyunbfgrwhpqpvbpkdodmjqsvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003374.476162-43-152185395802895/AnsiballZ_setup.py'
Jan 21 13:49:34 compute-0 sudo[106486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:35 compute-0 python3.9[106488]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:49:35 compute-0 sudo[106486]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:35 compute-0 sudo[106570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfoxcjcwakxsgukbvzcikyoojspwhfor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003374.476162-43-152185395802895/AnsiballZ_dnf.py'
Jan 21 13:49:35 compute-0 sudo[106570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:36 compute-0 python3.9[106572]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
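The 'Invoked with' lines serialize every module parameter as key=value pairs, with list values keeping their brackets. A rough parser (mine; adequate for the lines in this log, not a general solution) that turns one back into a dict:

    import ast
    import re

    ARG_RE = re.compile(r"(\w+)=(\[[^\]]*\]|\S+)")

    def parse_invoked_with(line):
        _, _, args = line.partition(" Invoked with ")
        out = {}
        for key, raw in ARG_RE.findall(args):
            try:
                out[key] = ast.literal_eval(raw)   # True/False/None, ints, lists
            except (ValueError, SyntaxError):
                out[key] = raw                     # bare paths like /etc/ansible/facts.d
        return out

    args = parse_invoked_with(
        "ansible-ansible.legacy.dnf Invoked with download_only=True "
        "name=['openvswitch'] state=None lock_timeout=30")
    print(args["name"], args["download_only"], args["lock_timeout"])
    # ['openvswitch'] True 30

Note the role runs dnf twice for openvswitch: this download_only=True pass pre-fetches the package, then a second invocation a few lines down installs it with state=present.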
Jan 21 13:49:36 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 21 13:49:36 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 21 13:49:36 compute-0 ceph-mon[75031]: 8.2 scrub starts
Jan 21 13:49:36 compute-0 ceph-mon[75031]: 8.2 scrub ok
Jan 21 13:49:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:37 compute-0 sudo[106570]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:37 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 21 13:49:37 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 21 13:49:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:37 compute-0 ceph-mon[75031]: pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:37 compute-0 ceph-mon[75031]: 11.8 scrub starts
Jan 21 13:49:37 compute-0 ceph-mon[75031]: 11.8 scrub ok
Jan 21 13:49:38 compute-0 sudo[106723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-romnqdgqqgfmgxfqchjgqcytosvttian ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003377.7596748-57-275696975461035/AnsiballZ_dnf.py'
Jan 21 13:49:38 compute-0 sudo[106723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:38 compute-0 python3.9[106725]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:49:38 compute-0 ceph-mon[75031]: 4.e scrub starts
Jan 21 13:49:38 compute-0 ceph-mon[75031]: 4.e scrub ok
Jan 21 13:49:38 compute-0 ceph-mon[75031]: pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:39 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 21 13:49:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:49:39
Jan 21 13:49:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:49:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:49:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', '.mgr']
Jan 21 13:49:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:49:39 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 21 13:49:39 compute-0 sudo[106723]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:39 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 21 13:49:39 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 21 13:49:40 compute-0 sudo[106876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bonttdnafyimjjijxyikdnctsguyghkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003379.7740083-65-192565388019302/AnsiballZ_systemd.py'
Jan 21 13:49:40 compute-0 sudo[106876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:40 compute-0 ceph-mon[75031]: 7.2 scrub starts
Jan 21 13:49:40 compute-0 ceph-mon[75031]: 7.2 scrub ok
Jan 21 13:49:40 compute-0 ceph-mon[75031]: pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:40 compute-0 python3.9[106878]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 13:49:40 compute-0 sudo[106876]: pam_unix(sudo:session): session closed for user root
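The systemd call above combines enabled=True with state=started, which collapses to a single systemctl operation. A rough Python equivalent (illustrative only; the Ansible module does its own state checks rather than literally running this):

    import subprocess

    # "enable --now" enables the unit and starts it in one step.
    subprocess.run(
        ["systemctl", "enable", "--now", "openvswitch.service"],
        check=True,
    )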
Jan 21 13:49:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:49:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:49:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:49:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:49:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:49:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:49:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:49:40 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:49:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:49:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:49:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:49:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:49:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:49:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:49:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:49:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:49:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:41 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 21 13:49:41 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 21 13:49:41 compute-0 ceph-mon[75031]: 5.7 scrub starts
Jan 21 13:49:41 compute-0 ceph-mon[75031]: 5.7 scrub ok
Jan 21 13:49:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:41 compute-0 python3.9[107031]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:49:42 compute-0 sudo[107181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmdmtaoqaqzbtreqkldwyemcibtxuwge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003382.0697312-83-177206407297134/AnsiballZ_sefcontext.py'
Jan 21 13:49:42 compute-0 sudo[107181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:42 compute-0 ceph-mon[75031]: pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:42 compute-0 ceph-mon[75031]: 2.15 scrub starts
Jan 21 13:49:42 compute-0 ceph-mon[75031]: 2.15 scrub ok
Jan 21 13:49:42 compute-0 python3.9[107183]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 21 13:49:43 compute-0 sudo[107181]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:43 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 21 13:49:43 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 21 13:49:44 compute-0 python3.9[107333]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:49:44 compute-0 ceph-mon[75031]: pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:44 compute-0 ceph-mon[75031]: 5.12 scrub starts
Jan 21 13:49:44 compute-0 ceph-mon[75031]: 5.12 scrub ok
Jan 21 13:49:44 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 21 13:49:44 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 21 13:49:44 compute-0 sudo[107489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnfuttlpmghnfubgfbhrwdywscbroarz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003384.5036309-101-214400622887107/AnsiballZ_dnf.py'
Jan 21 13:49:44 compute-0 sudo[107489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:45 compute-0 python3.9[107491]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:49:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:45 compute-0 ceph-mon[75031]: 3.f scrub starts
Jan 21 13:49:45 compute-0 ceph-mon[75031]: 3.f scrub ok
Jan 21 13:49:45 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 21 13:49:45 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 21 13:49:46 compute-0 sudo[107489]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:46 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 21 13:49:46 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 21 13:49:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:46 compute-0 ceph-mon[75031]: pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:46 compute-0 ceph-mon[75031]: 3.c scrub starts
Jan 21 13:49:46 compute-0 ceph-mon[75031]: 3.c scrub ok
Jan 21 13:49:46 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 21 13:49:46 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 21 13:49:47 compute-0 sudo[107642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umcvzuollynpyawhxmnuokxcbexnkvge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003386.6094036-109-66280587722309/AnsiballZ_command.py'
Jan 21 13:49:47 compute-0 sudo[107642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:47 compute-0 python3.9[107644]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:49:47 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 21 13:49:47 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 21 13:49:47 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 21 13:49:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:47 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 21 13:49:47 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 21 13:49:47 compute-0 ceph-mon[75031]: 7.5 scrub starts
Jan 21 13:49:47 compute-0 ceph-mon[75031]: 7.5 scrub ok
Jan 21 13:49:47 compute-0 ceph-mon[75031]: 8.9 scrub starts
Jan 21 13:49:47 compute-0 ceph-mon[75031]: 8.9 scrub ok
Jan 21 13:49:47 compute-0 ceph-mon[75031]: 5.16 scrub starts
Jan 21 13:49:47 compute-0 ceph-mon[75031]: 5.16 scrub ok
Jan 21 13:49:47 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 21 13:49:48 compute-0 sudo[107642]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:48 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 21 13:49:48 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 21 13:49:48 compute-0 sudo[107877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:49:48 compute-0 sudo[107877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:49:48 compute-0 sudo[107877]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:48 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 21 13:49:48 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 21 13:49:48 compute-0 sudo[107928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:49:48 compute-0 sudo[107928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:49:48 compute-0 ceph-mon[75031]: 7.1 scrub starts
Jan 21 13:49:48 compute-0 ceph-mon[75031]: 7.1 scrub ok
Jan 21 13:49:48 compute-0 ceph-mon[75031]: pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:48 compute-0 ceph-mon[75031]: 7.6 scrub starts
Jan 21 13:49:48 compute-0 ceph-mon[75031]: 7.6 scrub ok
Jan 21 13:49:48 compute-0 ceph-mon[75031]: 10.1a scrub starts
Jan 21 13:49:48 compute-0 ceph-mon[75031]: 10.1a scrub ok
Jan 21 13:49:48 compute-0 sudo[107979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaaabzusaavsatjagfaznkwpfgrrhyap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003388.2864437-117-72954995229330/AnsiballZ_file.py'
Jan 21 13:49:48 compute-0 sudo[107979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:49 compute-0 python3.9[107981]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 21 13:49:49 compute-0 sudo[107979]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:49 compute-0 sudo[107928]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:49:49 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:49:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:49:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:49:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:49:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:49:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:49:49 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:49:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:49:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:49:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:49:49 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:49:49 compute-0 sudo[108091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:49:49 compute-0 sudo[108091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:49:49 compute-0 sudo[108091]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:49 compute-0 sudo[108134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:49:49 compute-0 sudo[108134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
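cephadm drives ceph-volume through a content-addressed copy of itself under /var/lib/ceph/<fsid>/, inside a throwaway container from the pinned ceph image; the COMMAND line carries the full replayable invocation. A trivial sketch (patterns mine) pulling the fsid and the three LVM data devices out of the ceph-volume portion:

    import re

    cmd = ("ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- "
           "lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 "
           "/dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd")

    fsid = re.search(r"--fsid (\S+)", cmd).group(1)
    lvs = re.findall(r"/dev/\w+/\w+", cmd)
    print(fsid, lvs)
    # 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
    # ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1', '/dev/ceph_vg2/ceph_lv2']

The batch run below exits almost immediately with 'All data devices are unavailable', which is consistent with an idempotent re-run: the lvm list output further down shows each LV already tagged with an osd_id.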
Jan 21 13:49:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:49 compute-0 python3.9[108214]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:49:49 compute-0 ceph-mon[75031]: 3.3 scrub starts
Jan 21 13:49:49 compute-0 ceph-mon[75031]: 3.3 scrub ok
Jan 21 13:49:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:49:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:49:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:49:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:49:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:49:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:49:49 compute-0 podman[108227]: 2026-01-21 13:49:49.812869083 +0000 UTC m=+0.079944777 container create e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:49:49 compute-0 systemd[76413]: Created slice User Background Tasks Slice.
Jan 21 13:49:49 compute-0 systemd[76413]: Starting Cleanup of User's Temporary Files and Directories...
Jan 21 13:49:49 compute-0 systemd[1]: Started libpod-conmon-e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3.scope.
Jan 21 13:49:49 compute-0 systemd[76413]: Finished Cleanup of User's Temporary Files and Directories.
Jan 21 13:49:49 compute-0 podman[108227]: 2026-01-21 13:49:49.774205996 +0000 UTC m=+0.041281680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:49:49 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:49:49 compute-0 podman[108227]: 2026-01-21 13:49:49.916625008 +0000 UTC m=+0.183700732 container init e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 13:49:49 compute-0 podman[108227]: 2026-01-21 13:49:49.923596955 +0000 UTC m=+0.190672629 container start e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:49:49 compute-0 podman[108227]: 2026-01-21 13:49:49.927521928 +0000 UTC m=+0.194597632 container attach e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 13:49:49 compute-0 heuristic_goldwasser[108248]: 167 167
Jan 21 13:49:49 compute-0 systemd[1]: libpod-e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3.scope: Deactivated successfully.
Jan 21 13:49:49 compute-0 conmon[108248]: conmon e59be3fe22026dbe90e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3.scope/container/memory.events
Jan 21 13:49:49 compute-0 podman[108227]: 2026-01-21 13:49:49.930766286 +0000 UTC m=+0.197841990 container died e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:49:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d5b4334d94eb08db5cec0c2fc3a62f4add7b7eba07fbd49bd9c339d7a740a62-merged.mount: Deactivated successfully.
Jan 21 13:49:49 compute-0 podman[108227]: 2026-01-21 13:49:49.972422123 +0000 UTC m=+0.239497787 container remove e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 13:49:49 compute-0 systemd[1]: libpod-conmon-e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3.scope: Deactivated successfully.
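Each of these cephadm helpers shows up as a complete podman lifecycle in the journal: image pull, create, init, start, attach, died, remove, all within a fraction of a second. (The pull at m=+0.041 is journaled after the create at m=+0.079 despite happening first; podman flushes events out of order.) A sketch (regex mine, fitted to these lines) for extracting the event type, container name, and ID:

    import re

    PODMAN_RE = re.compile(
        r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
    )

    line = ("2026-01-21 13:49:49.812869083 +0000 UTC m=+0.079944777 container create "
            "e59be3fe22026dbe90e4dd9090f5e770f35daa871bd86a0bb06b9e95ae9425a3 "
            "(image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6"
            "dac688bcc77369b06009b5830fa8d86, name=heuristic_goldwasser, ceph=True)")
    m = PODMAN_RE.search(line)
    print(m.group("event"), m.group("name"), m.group("cid")[:12])
    # create heuristic_goldwasser e59be3fe2202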
Jan 21 13:49:50 compute-0 podman[108347]: 2026-01-21 13:49:50.167121077 +0000 UTC m=+0.057415716 container create dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 13:49:50 compute-0 systemd[1]: Started libpod-conmon-dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2.scope.
Jan 21 13:49:50 compute-0 podman[108347]: 2026-01-21 13:49:50.142831345 +0000 UTC m=+0.033126064 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:49:50 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:50 compute-0 podman[108347]: 2026-01-21 13:49:50.265398651 +0000 UTC m=+0.155693320 container init dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:49:50 compute-0 podman[108347]: 2026-01-21 13:49:50.278688079 +0000 UTC m=+0.168982728 container start dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 13:49:50 compute-0 podman[108347]: 2026-01-21 13:49:50.282589112 +0000 UTC m=+0.172883761 container attach dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:49:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
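The pg_autoscaler block above is internally consistent: each 'pg target' equals the pool's usage fraction times its bias times 300. The 300 is presumably mon_target_pg_per_osd (default 100) times the three OSD daemons visible on this host (pids 85740, 86795, 87843); the log only prints the products, so treat that decomposition as an inference. A quick check against the logged values:

    import math

    # (pool, usage fraction, bias, logged pg target) copied from the lines above
    logged = [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 1.2753072983198444e-06, 4.0, 0.0015303687579838134),
        ("default.rgw.log",    4.1969867161554995e-06, 1.0, 0.0012590960148466499),
    ]
    for name, usage, bias, target in logged:
        assert math.isclose(usage * bias * 300, target), name
    print("all pg targets reproduce as usage * bias * 300")

Every quantized value matches the pool's current PG count, so the autoscaler changes nothing on this pass.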
Jan 21 13:49:50 compute-0 sudo[108441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whvxsjtzysaazavjqfxoyilbrukllcsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003390.022037-133-78873289489256/AnsiballZ_dnf.py'
Jan 21 13:49:50 compute-0 sudo[108441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:50 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 21 13:49:50 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 21 13:49:50 compute-0 python3.9[108443]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:49:50 compute-0 epic_hugle[108403]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:49:50 compute-0 epic_hugle[108403]: --> All data devices are unavailable
Jan 21 13:49:50 compute-0 systemd[1]: libpod-dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2.scope: Deactivated successfully.
Jan 21 13:49:50 compute-0 podman[108347]: 2026-01-21 13:49:50.809481022 +0000 UTC m=+0.699775731 container died dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:49:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4a7b749a1626a1f6856d4cb0817d560a098d48aef9c2a0a9517a6a777a93df0-merged.mount: Deactivated successfully.
Jan 21 13:49:50 compute-0 podman[108347]: 2026-01-21 13:49:50.870961724 +0000 UTC m=+0.761256393 container remove dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hugle, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:49:50 compute-0 systemd[1]: libpod-conmon-dce1ac073f911d04062fb4dace876b55243777f889f53f89e1dd7c95c63c06f2.scope: Deactivated successfully.
Jan 21 13:49:50 compute-0 sudo[108134]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:51 compute-0 sudo[108474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:49:51 compute-0 sudo[108474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:49:51 compute-0 sudo[108474]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:51 compute-0 sudo[108499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:49:51 compute-0 sudo[108499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:49:51 compute-0 ceph-mon[75031]: pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:51 compute-0 ceph-mon[75031]: 4.1 scrub starts
Jan 21 13:49:51 compute-0 ceph-mon[75031]: 4.1 scrub ok
Jan 21 13:49:51 compute-0 podman[108536]: 2026-01-21 13:49:51.41979837 +0000 UTC m=+0.062401965 container create 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 13:49:51 compute-0 systemd[1]: Started libpod-conmon-9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6.scope.
Jan 21 13:49:51 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:49:51 compute-0 podman[108536]: 2026-01-21 13:49:51.39310202 +0000 UTC m=+0.035705615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:49:51 compute-0 podman[108536]: 2026-01-21 13:49:51.495484533 +0000 UTC m=+0.138088188 container init 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 13:49:51 compute-0 podman[108536]: 2026-01-21 13:49:51.507776327 +0000 UTC m=+0.150379892 container start 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 13:49:51 compute-0 focused_kilby[108552]: 167 167
Jan 21 13:49:51 compute-0 podman[108536]: 2026-01-21 13:49:51.511679031 +0000 UTC m=+0.154282696 container attach 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 13:49:51 compute-0 systemd[1]: libpod-9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6.scope: Deactivated successfully.
Jan 21 13:49:51 compute-0 podman[108536]: 2026-01-21 13:49:51.513962475 +0000 UTC m=+0.156566070 container died 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 13:49:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1850a9cb523f8bbe592b2dcf648719ee742657a7a681b4f6c1ec70e4eae78315-merged.mount: Deactivated successfully.
Jan 21 13:49:51 compute-0 podman[108536]: 2026-01-21 13:49:51.566855612 +0000 UTC m=+0.209459177 container remove 9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_kilby, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:49:51 compute-0 systemd[1]: libpod-conmon-9fab9c02571c18b2982089fc86bd4ad50c655f073a4dd320ccdbdd9160c4e6b6.scope: Deactivated successfully.
Jan 21 13:49:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:51 compute-0 podman[108576]: 2026-01-21 13:49:51.753696258 +0000 UTC m=+0.057293674 container create 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 13:49:51 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 21 13:49:51 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 21 13:49:51 compute-0 sudo[108441]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:51 compute-0 systemd[1]: Started libpod-conmon-9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6.scope.
Jan 21 13:49:51 compute-0 podman[108576]: 2026-01-21 13:49:51.724260543 +0000 UTC m=+0.027858009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:49:51 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba31297f7b70d77546bf506a5ed7ebba311ad73553f8cf5e22928afda4b9c84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba31297f7b70d77546bf506a5ed7ebba311ad73553f8cf5e22928afda4b9c84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba31297f7b70d77546bf506a5ed7ebba311ad73553f8cf5e22928afda4b9c84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba31297f7b70d77546bf506a5ed7ebba311ad73553f8cf5e22928afda4b9c84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:51 compute-0 podman[108576]: 2026-01-21 13:49:51.867030662 +0000 UTC m=+0.170628078 container init 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 13:49:51 compute-0 podman[108576]: 2026-01-21 13:49:51.873900187 +0000 UTC m=+0.177497563 container start 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 13:49:51 compute-0 podman[108576]: 2026-01-21 13:49:51.887743857 +0000 UTC m=+0.191341243 container attach 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]: {
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:     "0": [
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:         {
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "devices": [
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "/dev/loop3"
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             ],
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_name": "ceph_lv0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_size": "21470642176",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "name": "ceph_lv0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "tags": {
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.cluster_name": "ceph",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.crush_device_class": "",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.encrypted": "0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.objectstore": "bluestore",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.osd_id": "0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.type": "block",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.vdo": "0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.with_tpm": "0"
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             },
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "type": "block",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "vg_name": "ceph_vg0"
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:         }
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:     ],
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:     "1": [
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:         {
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "devices": [
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "/dev/loop4"
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             ],
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_name": "ceph_lv1",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_size": "21470642176",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "name": "ceph_lv1",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "tags": {
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.cluster_name": "ceph",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.crush_device_class": "",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.encrypted": "0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.objectstore": "bluestore",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.osd_id": "1",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.type": "block",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.vdo": "0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.with_tpm": "0"
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             },
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "type": "block",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "vg_name": "ceph_vg1"
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:         }
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:     ],
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:     "2": [
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:         {
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "devices": [
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "/dev/loop5"
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             ],
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_name": "ceph_lv2",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_size": "21470642176",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "name": "ceph_lv2",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "tags": {
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.cluster_name": "ceph",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.crush_device_class": "",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.encrypted": "0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.objectstore": "bluestore",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.osd_id": "2",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.type": "block",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.vdo": "0",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:                 "ceph.with_tpm": "0"
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             },
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "type": "block",
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:             "vg_name": "ceph_vg2"
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:         }
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]:     ]
Jan 21 13:49:52 compute-0 hardcore_merkle[108592]: }
Jan 21 13:49:52 compute-0 systemd[1]: libpod-9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6.scope: Deactivated successfully.
Jan 21 13:49:52 compute-0 podman[108576]: 2026-01-21 13:49:52.309882189 +0000 UTC m=+0.613479635 container died 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 13:49:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ba31297f7b70d77546bf506a5ed7ebba311ad73553f8cf5e22928afda4b9c84-merged.mount: Deactivated successfully.
Jan 21 13:49:52 compute-0 sudo[108751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zowpoatzzveyzfdpuqoemmidhodmrjwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003392.0070193-142-27082732019408/AnsiballZ_dnf.py'
Jan 21 13:49:52 compute-0 sudo[108751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:52 compute-0 podman[108576]: 2026-01-21 13:49:52.360502171 +0000 UTC m=+0.664099547 container remove 9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:49:52 compute-0 systemd[1]: libpod-conmon-9044728ee2f317b7bacf0edf75bb38d7a55c4bfa1abb875448e7440548e098f6.scope: Deactivated successfully.
Jan 21 13:49:52 compute-0 ceph-mon[75031]: 3.1 scrub starts
Jan 21 13:49:52 compute-0 ceph-mon[75031]: 3.1 scrub ok
Jan 21 13:49:52 compute-0 sudo[108499]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:52 compute-0 sudo[108764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:49:52 compute-0 sudo[108764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:49:52 compute-0 sudo[108764]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:52 compute-0 sudo[108789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:49:52 compute-0 sudo[108789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:49:52 compute-0 python3.9[108763]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:49:52 compute-0 podman[108827]: 2026-01-21 13:49:52.780808628 +0000 UTC m=+0.040759218 container create d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:49:52 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 21 13:49:52 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 21 13:49:52 compute-0 systemd[1]: Started libpod-conmon-d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339.scope.
Jan 21 13:49:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:49:52 compute-0 podman[108827]: 2026-01-21 13:49:52.858901138 +0000 UTC m=+0.118851788 container init d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 13:49:52 compute-0 podman[108827]: 2026-01-21 13:49:52.76379119 +0000 UTC m=+0.023741780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:49:52 compute-0 podman[108827]: 2026-01-21 13:49:52.866013378 +0000 UTC m=+0.125963948 container start d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:49:52 compute-0 podman[108827]: 2026-01-21 13:49:52.869584624 +0000 UTC m=+0.129535274 container attach d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:49:52 compute-0 focused_brown[108844]: 167 167
Jan 21 13:49:52 compute-0 systemd[1]: libpod-d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339.scope: Deactivated successfully.
Jan 21 13:49:52 compute-0 podman[108827]: 2026-01-21 13:49:52.871547491 +0000 UTC m=+0.131498051 container died d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:49:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-089a9b083ac005eb86a6541851716f35bb5d6a9a44ebbe86350ef8a493a6b906-merged.mount: Deactivated successfully.
Jan 21 13:49:52 compute-0 podman[108827]: 2026-01-21 13:49:52.908784703 +0000 UTC m=+0.168735263 container remove d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 13:49:52 compute-0 systemd[1]: libpod-conmon-d290cddd15cdb4a5f853d7f5f4a5cff758879a2debefdd2a9fc6f318a9bef339.scope: Deactivated successfully.
Jan 21 13:49:53 compute-0 podman[108867]: 2026-01-21 13:49:53.067137276 +0000 UTC m=+0.042010297 container create 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 21 13:49:53 compute-0 systemd[1]: Started libpod-conmon-1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9.scope.
Jan 21 13:49:53 compute-0 podman[108867]: 2026-01-21 13:49:53.047660119 +0000 UTC m=+0.022533160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:49:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:49:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f8867a0d3370718a180db39a058eadcc250b351f33e182b11caad456fd6de5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f8867a0d3370718a180db39a058eadcc250b351f33e182b11caad456fd6de5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f8867a0d3370718a180db39a058eadcc250b351f33e182b11caad456fd6de5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f8867a0d3370718a180db39a058eadcc250b351f33e182b11caad456fd6de5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:49:53 compute-0 podman[108867]: 2026-01-21 13:49:53.167746936 +0000 UTC m=+0.142619977 container init 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:49:53 compute-0 podman[108867]: 2026-01-21 13:49:53.176956506 +0000 UTC m=+0.151829517 container start 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:49:53 compute-0 podman[108867]: 2026-01-21 13:49:53.180654105 +0000 UTC m=+0.155527186 container attach 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:49:53 compute-0 ceph-mon[75031]: pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:53 compute-0 ceph-mon[75031]: 8.10 scrub starts
Jan 21 13:49:53 compute-0 ceph-mon[75031]: 8.10 scrub ok
Jan 21 13:49:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:53 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 21 13:49:53 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 21 13:49:53 compute-0 lvm[108963]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:49:53 compute-0 lvm[108963]: VG ceph_vg0 finished
Jan 21 13:49:53 compute-0 lvm[108964]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:49:53 compute-0 lvm[108964]: VG ceph_vg1 finished
Jan 21 13:49:53 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 21 13:49:53 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 21 13:49:53 compute-0 lvm[108966]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:49:53 compute-0 lvm[108966]: VG ceph_vg2 finished
Jan 21 13:49:53 compute-0 sudo[108751]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:53 compute-0 loving_cerf[108884]: {}
Jan 21 13:49:54 compute-0 systemd[1]: libpod-1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9.scope: Deactivated successfully.
Jan 21 13:49:54 compute-0 systemd[1]: libpod-1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9.scope: Consumed 1.256s CPU time.
Jan 21 13:49:54 compute-0 podman[108867]: 2026-01-21 13:49:54.00378777 +0000 UTC m=+0.978660811 container died 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:49:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3f8867a0d3370718a180db39a058eadcc250b351f33e182b11caad456fd6de5-merged.mount: Deactivated successfully.
Jan 21 13:49:54 compute-0 podman[108867]: 2026-01-21 13:49:54.056331788 +0000 UTC m=+1.031204809 container remove 1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:49:54 compute-0 systemd[1]: libpod-conmon-1b2ca049b8d279ac6ae44116ab6d68ba3a349be742c11eb097dd6b4fb07302a9.scope: Deactivated successfully.
Jan 21 13:49:54 compute-0 sudo[108789]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:49:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:49:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:49:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:49:54 compute-0 sudo[109012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:49:54 compute-0 sudo[109012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:49:54 compute-0 sudo[109012]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:54 compute-0 ceph-mon[75031]: 8.d scrub starts
Jan 21 13:49:54 compute-0 ceph-mon[75031]: 8.d scrub ok
Jan 21 13:49:54 compute-0 ceph-mon[75031]: 11.6 scrub starts
Jan 21 13:49:54 compute-0 ceph-mon[75031]: 11.6 scrub ok
Jan 21 13:49:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:49:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:49:54 compute-0 sudo[109155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iowqesmityhcspejwlboclrkvdsfgoli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003394.1578734-154-266038078838563/AnsiballZ_stat.py'
Jan 21 13:49:54 compute-0 sudo[109155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:54 compute-0 python3.9[109157]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:49:54 compute-0 sudo[109155]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:55 compute-0 sudo[109309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uucrhfakcetxvjoosmrvjpjcfvsysdul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003394.9681249-162-172161548513819/AnsiballZ_slurp.py'
Jan 21 13:49:55 compute-0 sudo[109309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:49:55 compute-0 ceph-mon[75031]: pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:55 compute-0 python3.9[109311]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 21 13:49:55 compute-0 sudo[109309]: pam_unix(sudo:session): session closed for user root
Jan 21 13:49:56 compute-0 sshd-session[106029]: Connection closed by 192.168.122.30 port 44042
Jan 21 13:49:56 compute-0 sshd-session[106026]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:49:56 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 21 13:49:56 compute-0 systemd[1]: session-36.scope: Consumed 19.262s CPU time.
Jan 21 13:49:56 compute-0 systemd-logind[780]: Session 36 logged out. Waiting for processes to exit.
Jan 21 13:49:56 compute-0 systemd-logind[780]: Removed session 36.
Jan 21 13:49:56 compute-0 ceph-mon[75031]: pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:49:56 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 21 13:49:56 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 21 13:49:57 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 21 13:49:57 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 21 13:49:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:57 compute-0 ceph-mon[75031]: 10.19 scrub starts
Jan 21 13:49:57 compute-0 ceph-mon[75031]: 10.19 scrub ok
Jan 21 13:49:58 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 21 13:49:58 compute-0 ceph-mon[75031]: 8.e scrub starts
Jan 21 13:49:58 compute-0 ceph-mon[75031]: 8.e scrub ok
Jan 21 13:49:58 compute-0 ceph-mon[75031]: pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:58 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 21 13:49:59 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 21 13:49:59 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 21 13:49:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:49:59 compute-0 ceph-mon[75031]: 11.9 scrub starts
Jan 21 13:49:59 compute-0 ceph-mon[75031]: 11.9 scrub ok
Jan 21 13:49:59 compute-0 ceph-mon[75031]: 5.9 scrub starts
Jan 21 13:49:59 compute-0 ceph-mon[75031]: 5.9 scrub ok
Jan 21 13:50:00 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 21 13:50:00 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 21 13:50:00 compute-0 ceph-mon[75031]: pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:00 compute-0 ceph-mon[75031]: 10.6 scrub starts
Jan 21 13:50:00 compute-0 ceph-mon[75031]: 10.6 scrub ok
Jan 21 13:50:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:01 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 21 13:50:01 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 21 13:50:01 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 21 13:50:01 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 21 13:50:01 compute-0 sshd-session[109336]: Accepted publickey for zuul from 192.168.122.30 port 33510 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:50:01 compute-0 systemd-logind[780]: New session 37 of user zuul.
Jan 21 13:50:01 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 21 13:50:01 compute-0 sshd-session[109336]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:50:02 compute-0 ceph-mon[75031]: pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:02 compute-0 ceph-mon[75031]: 11.f scrub starts
Jan 21 13:50:02 compute-0 ceph-mon[75031]: 11.2 scrub starts
Jan 21 13:50:02 compute-0 ceph-mon[75031]: 11.f scrub ok
Jan 21 13:50:02 compute-0 ceph-mon[75031]: 11.2 scrub ok
Jan 21 13:50:02 compute-0 python3.9[109490]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:50:03 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 21 13:50:03 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 21 13:50:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:03 compute-0 ceph-mon[75031]: 4.8 scrub starts
Jan 21 13:50:03 compute-0 ceph-mon[75031]: 4.8 scrub ok
Jan 21 13:50:03 compute-0 python3.9[109644]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:50:04 compute-0 ceph-mon[75031]: pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:05 compute-0 python3.9[109837]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:50:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:05 compute-0 sshd-session[109340]: Connection closed by 192.168.122.30 port 33510
Jan 21 13:50:05 compute-0 sshd-session[109336]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:50:05 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 21 13:50:05 compute-0 systemd[1]: session-37.scope: Consumed 2.785s CPU time.
Jan 21 13:50:05 compute-0 systemd-logind[780]: Session 37 logged out. Waiting for processes to exit.
Jan 21 13:50:05 compute-0 systemd-logind[780]: Removed session 37.
Jan 21 13:50:05 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 21 13:50:05 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 21 13:50:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:06 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 21 13:50:06 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 21 13:50:06 compute-0 ceph-mon[75031]: pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:06 compute-0 ceph-mon[75031]: 7.e scrub starts
Jan 21 13:50:06 compute-0 ceph-mon[75031]: 7.e scrub ok
Jan 21 13:50:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:07 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 21 13:50:07 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 21 13:50:07 compute-0 ceph-mon[75031]: 8.c scrub starts
Jan 21 13:50:07 compute-0 ceph-mon[75031]: 8.c scrub ok
Jan 21 13:50:08 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 21 13:50:08 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 21 13:50:08 compute-0 ceph-mon[75031]: pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:08 compute-0 ceph-mon[75031]: 7.3 scrub starts
Jan 21 13:50:08 compute-0 ceph-mon[75031]: 7.3 scrub ok
Jan 21 13:50:08 compute-0 ceph-mon[75031]: 5.f scrub starts
Jan 21 13:50:08 compute-0 ceph-mon[75031]: 5.f scrub ok
Jan 21 13:50:09 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 21 13:50:09 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 21 13:50:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:09 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 21 13:50:09 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 21 13:50:10 compute-0 ceph-mon[75031]: 2.d scrub starts
Jan 21 13:50:10 compute-0 ceph-mon[75031]: 2.d scrub ok
Jan 21 13:50:10 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 21 13:50:10 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 21 13:50:10 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 21 13:50:10 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 21 13:50:10 compute-0 sshd-session[109863]: Accepted publickey for zuul from 192.168.122.30 port 35244 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:50:10 compute-0 systemd-logind[780]: New session 38 of user zuul.
Jan 21 13:50:10 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 21 13:50:10 compute-0 sshd-session[109863]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:50:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:50:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:50:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:50:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:50:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:50:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:50:11 compute-0 ceph-mon[75031]: pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:11 compute-0 ceph-mon[75031]: 7.f scrub starts
Jan 21 13:50:11 compute-0 ceph-mon[75031]: 7.f scrub ok
Jan 21 13:50:11 compute-0 ceph-mon[75031]: 3.6 scrub starts
Jan 21 13:50:11 compute-0 ceph-mon[75031]: 7.8 scrub starts
Jan 21 13:50:11 compute-0 ceph-mon[75031]: 3.6 scrub ok
Jan 21 13:50:11 compute-0 ceph-mon[75031]: 7.8 scrub ok
Jan 21 13:50:11 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 21 13:50:11 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 21 13:50:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:11 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 21 13:50:11 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 21 13:50:11 compute-0 python3.9[110016]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:50:12 compute-0 ceph-mon[75031]: 10.b scrub starts
Jan 21 13:50:12 compute-0 ceph-mon[75031]: 10.b scrub ok
Jan 21 13:50:12 compute-0 ceph-mon[75031]: 4.a scrub starts
Jan 21 13:50:12 compute-0 ceph-mon[75031]: 4.a scrub ok
Jan 21 13:50:12 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 21 13:50:12 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 21 13:50:12 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 21 13:50:12 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 21 13:50:12 compute-0 python3.9[110170]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:50:13 compute-0 ceph-mon[75031]: pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:13 compute-0 ceph-mon[75031]: 2.5 scrub starts
Jan 21 13:50:13 compute-0 ceph-mon[75031]: 2.5 scrub ok
Jan 21 13:50:13 compute-0 ceph-mon[75031]: 8.4 scrub starts
Jan 21 13:50:13 compute-0 ceph-mon[75031]: 8.4 scrub ok
Jan 21 13:50:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:13 compute-0 sudo[110324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fogzcivhckxinzwcgwluzukipgjacrvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003413.4303384-35-215149711658018/AnsiballZ_setup.py'
Jan 21 13:50:13 compute-0 sudo[110324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:14 compute-0 python3.9[110326]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:50:14 compute-0 sudo[110324]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:14 compute-0 sudo[110408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rghbcqqtgpfxxryboacedhksqakiwcxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003413.4303384-35-215149711658018/AnsiballZ_dnf.py'
Jan 21 13:50:14 compute-0 sudo[110408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:14 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 21 13:50:14 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 21 13:50:14 compute-0 python3.9[110410]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:50:15 compute-0 ceph-mon[75031]: pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:15 compute-0 ceph-mon[75031]: 7.a scrub starts
Jan 21 13:50:15 compute-0 ceph-mon[75031]: 7.a scrub ok
Jan 21 13:50:15 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 21 13:50:15 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 21 13:50:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:16 compute-0 sudo[110408]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:16 compute-0 ceph-mon[75031]: 10.2 scrub starts
Jan 21 13:50:16 compute-0 ceph-mon[75031]: 10.2 scrub ok
Jan 21 13:50:16 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 21 13:50:16 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 21 13:50:16 compute-0 sudo[110561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjtqmraicszqjyrxdkuezmsonnamgqsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003416.33399-47-10808837084618/AnsiballZ_setup.py'
Jan 21 13:50:16 compute-0 sudo[110561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:16 compute-0 python3.9[110563]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:50:17 compute-0 sudo[110561]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:17 compute-0 ceph-mon[75031]: pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:17 compute-0 ceph-mon[75031]: 2.3 scrub starts
Jan 21 13:50:17 compute-0 ceph-mon[75031]: 2.3 scrub ok
Jan 21 13:50:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:17 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 21 13:50:17 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 21 13:50:17 compute-0 sudo[110756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onslvqmrcglydiwnvxcvbvzwtwksrfkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003417.4312167-58-137349675764072/AnsiballZ_file.py'
Jan 21 13:50:17 compute-0 sudo[110756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:18 compute-0 python3.9[110758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:50:18 compute-0 sudo[110756]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:18 compute-0 ceph-mon[75031]: 3.a scrub starts
Jan 21 13:50:18 compute-0 ceph-mon[75031]: 3.a scrub ok
Jan 21 13:50:18 compute-0 sudo[110908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtubgbvixwhvrwrmatxxxuoxexqzmyhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003418.2357974-66-96902078642954/AnsiballZ_command.py'
Jan 21 13:50:18 compute-0 sudo[110908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:18 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 21 13:50:18 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 21 13:50:18 compute-0 python3.9[110910]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:50:18 compute-0 sudo[110908]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:19 compute-0 ceph-mon[75031]: pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:19 compute-0 ceph-mon[75031]: 3.11 scrub starts
Jan 21 13:50:19 compute-0 ceph-mon[75031]: 3.11 scrub ok
Jan 21 13:50:19 compute-0 sudo[111073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfdnzncceiyknxzvyvjuddtjxiqdpqxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003419.1274054-74-15270458395708/AnsiballZ_stat.py'
Jan 21 13:50:19 compute-0 sudo[111073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:19 compute-0 python3.9[111075]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:50:19 compute-0 sudo[111073]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:19 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 21 13:50:19 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 21 13:50:20 compute-0 sudo[111151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbvenotvsfgxpdlfyzctjehebfchqvar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003419.1274054-74-15270458395708/AnsiballZ_file.py'
Jan 21 13:50:20 compute-0 sudo[111151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:20 compute-0 ceph-mon[75031]: 11.1 scrub starts
Jan 21 13:50:20 compute-0 ceph-mon[75031]: 11.1 scrub ok
Jan 21 13:50:20 compute-0 python3.9[111153]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:50:20 compute-0 sudo[111151]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:20 compute-0 sudo[111303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebghsiiihpxsqnakcdoshkkdvxqztrwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003420.415827-86-288500182237/AnsiballZ_stat.py'
Jan 21 13:50:20 compute-0 sudo[111303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:20 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 21 13:50:20 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 21 13:50:20 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 21 13:50:20 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 21 13:50:20 compute-0 python3.9[111305]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:50:20 compute-0 sudo[111303]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:21 compute-0 sudo[111381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbrkzvaecsmljrkaguhrhuzorplsljly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003420.415827-86-288500182237/AnsiballZ_file.py'
Jan 21 13:50:21 compute-0 sudo[111381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:21 compute-0 ceph-mon[75031]: pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:21 compute-0 ceph-mon[75031]: 7.15 scrub starts
Jan 21 13:50:21 compute-0 ceph-mon[75031]: 7.15 scrub ok
Jan 21 13:50:21 compute-0 ceph-mon[75031]: 3.15 scrub starts
Jan 21 13:50:21 compute-0 ceph-mon[75031]: 3.15 scrub ok
Jan 21 13:50:21 compute-0 python3.9[111383]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:50:21 compute-0 sudo[111381]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:21 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 21 13:50:21 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 21 13:50:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:21 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 21 13:50:21 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 21 13:50:21 compute-0 sudo[111533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctvdakdhglgrpxgzglbwauiofvmoekfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003421.5123842-99-175248983302521/AnsiballZ_ini_file.py'
Jan 21 13:50:21 compute-0 sudo[111533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:22 compute-0 python3.9[111535]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:50:22 compute-0 sudo[111533]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:22 compute-0 ceph-mon[75031]: 5.c scrub starts
Jan 21 13:50:22 compute-0 ceph-mon[75031]: 3.17 scrub starts
Jan 21 13:50:22 compute-0 ceph-mon[75031]: 3.17 scrub ok
Jan 21 13:50:22 compute-0 sudo[111685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaminbtbarpqxtxjguyhsjziqqpvcgmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003422.2923436-99-267522795564547/AnsiballZ_ini_file.py'
Jan 21 13:50:22 compute-0 sudo[111685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:22 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 21 13:50:22 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 21 13:50:22 compute-0 python3.9[111687]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:50:22 compute-0 sudo[111685]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:22 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 21 13:50:22 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 21 13:50:23 compute-0 sudo[111837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajsxfcqpbyheqafrquqkuyexyfmilwih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003422.9145467-99-146948542866118/AnsiballZ_ini_file.py'
Jan 21 13:50:23 compute-0 sudo[111837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:23 compute-0 ceph-mon[75031]: pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:23 compute-0 ceph-mon[75031]: 5.c scrub ok
Jan 21 13:50:23 compute-0 ceph-mon[75031]: 2.4 scrub starts
Jan 21 13:50:23 compute-0 ceph-mon[75031]: 2.4 scrub ok
Jan 21 13:50:23 compute-0 ceph-mon[75031]: 3.9 scrub starts
Jan 21 13:50:23 compute-0 ceph-mon[75031]: 3.9 scrub ok
Jan 21 13:50:23 compute-0 python3.9[111839]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:50:23 compute-0 sudo[111837]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:23 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 21 13:50:23 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 21 13:50:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:23 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 21 13:50:23 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 21 13:50:23 compute-0 sudo[111989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayuwjvubuxzzslfgxhpzysxvaqjbwaoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003423.5394592-99-38169352321312/AnsiballZ_ini_file.py'
Jan 21 13:50:23 compute-0 sudo[111989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:24 compute-0 python3.9[111991]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:50:24 compute-0 sudo[111989]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:24 compute-0 ceph-mon[75031]: 2.7 scrub starts
Jan 21 13:50:24 compute-0 ceph-mon[75031]: 2.7 scrub ok
Jan 21 13:50:24 compute-0 ceph-mon[75031]: 8.1d scrub starts
Jan 21 13:50:24 compute-0 ceph-mon[75031]: 8.1d scrub ok
Jan 21 13:50:24 compute-0 sudo[112141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfcuziwlyxunflpuxwabjkdltnzfiijk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003424.4024725-130-23482181874839/AnsiballZ_dnf.py'
Jan 21 13:50:24 compute-0 sudo[112141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:24 compute-0 python3.9[112143]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:50:25 compute-0 ceph-mon[75031]: pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:25 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 21 13:50:25 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 21 13:50:26 compute-0 sudo[112141]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:26 compute-0 ceph-mon[75031]: 11.19 scrub starts
Jan 21 13:50:26 compute-0 ceph-mon[75031]: 11.19 scrub ok
Jan 21 13:50:26 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 21 13:50:26 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 21 13:50:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:26 compute-0 sudo[112294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jozquhjviptcfgdijiyzsuobccdqozlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003426.5121412-141-75721405427276/AnsiballZ_setup.py'
Jan 21 13:50:26 compute-0 sudo[112294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:27 compute-0 python3.9[112296]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:50:27 compute-0 sudo[112294]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:27 compute-0 ceph-mon[75031]: pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:27 compute-0 ceph-mon[75031]: 5.1 scrub starts
Jan 21 13:50:27 compute-0 ceph-mon[75031]: 5.1 scrub ok
Jan 21 13:50:27 compute-0 sudo[112448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osqoqlkpqfydqrbvovhcioltfjqcawvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003427.3313198-149-207628903114621/AnsiballZ_stat.py'
Jan 21 13:50:27 compute-0 sudo[112448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:27 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 21 13:50:27 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 21 13:50:27 compute-0 python3.9[112450]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:50:27 compute-0 sudo[112448]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:28 compute-0 ceph-mon[75031]: 8.1b scrub starts
Jan 21 13:50:28 compute-0 ceph-mon[75031]: 8.1b scrub ok
Jan 21 13:50:28 compute-0 sudo[112600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpqqhbdcrkyfnycfyzwszzjswizkokve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003428.125088-158-41068698158929/AnsiballZ_stat.py'
Jan 21 13:50:28 compute-0 sudo[112600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:28 compute-0 python3.9[112602]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:50:28 compute-0 sudo[112600]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:29 compute-0 sudo[112752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luynwobbsqmccwizeduqmfhyrnskthpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003428.9468534-168-149033653862466/AnsiballZ_command.py'
Jan 21 13:50:29 compute-0 sudo[112752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:29 compute-0 ceph-mon[75031]: pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:29 compute-0 python3.9[112754]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:50:29 compute-0 sudo[112752]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:30 compute-0 sudo[112905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzrnfbmacizqbjuikurhhgmeqwkhbgce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003429.7809494-178-265580135007153/AnsiballZ_service_facts.py'
Jan 21 13:50:30 compute-0 sudo[112905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:30 compute-0 python3.9[112907]: ansible-service_facts Invoked
Jan 21 13:50:30 compute-0 network[112924]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 13:50:30 compute-0 network[112925]: 'network-scripts' will be removed from distribution in near future.
Jan 21 13:50:30 compute-0 network[112926]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 13:50:31 compute-0 ceph-mon[75031]: pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:31 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 21 13:50:31 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 21 13:50:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:32 compute-0 ceph-mon[75031]: 11.18 scrub starts
Jan 21 13:50:32 compute-0 ceph-mon[75031]: 11.18 scrub ok
Jan 21 13:50:33 compute-0 ceph-mon[75031]: pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:34 compute-0 sudo[112905]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:34 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 21 13:50:34 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 21 13:50:35 compute-0 sudo[113209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfobjkycnrwswfkunyevjdbhdyowilsn ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769003434.871602-193-70280091815123/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769003434.871602-193-70280091815123/args'
Jan 21 13:50:35 compute-0 sudo[113209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:35 compute-0 ceph-mon[75031]: pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:35 compute-0 ceph-mon[75031]: 10.10 scrub starts
Jan 21 13:50:35 compute-0 ceph-mon[75031]: 10.10 scrub ok
Jan 21 13:50:35 compute-0 sudo[113209]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:35 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 21 13:50:35 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 21 13:50:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:36 compute-0 sudo[113376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdxosyiorlkzahuretcatkkipiibwnic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003435.6973925-204-114092372790918/AnsiballZ_dnf.py'
Jan 21 13:50:36 compute-0 sudo[113376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:36 compute-0 python3.9[113378]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:50:36 compute-0 ceph-mon[75031]: 10.13 scrub starts
Jan 21 13:50:36 compute-0 ceph-mon[75031]: 10.13 scrub ok
Jan 21 13:50:36 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 21 13:50:36 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 21 13:50:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:36 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 21 13:50:36 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 21 13:50:37 compute-0 ceph-mon[75031]: pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:37 compute-0 ceph-mon[75031]: 5.1d scrub starts
Jan 21 13:50:37 compute-0 ceph-mon[75031]: 5.1d scrub ok
Jan 21 13:50:37 compute-0 ceph-mon[75031]: 3.12 scrub starts
Jan 21 13:50:37 compute-0 ceph-mon[75031]: 3.12 scrub ok
Jan 21 13:50:37 compute-0 sudo[113376]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:37 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 21 13:50:37 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 21 13:50:38 compute-0 ceph-mon[75031]: 8.1f scrub starts
Jan 21 13:50:38 compute-0 ceph-mon[75031]: 8.1f scrub ok
Jan 21 13:50:38 compute-0 sudo[113529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smrylhdiujwptbabzzhbafgdbgmpkjnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003437.7770274-217-85132869754434/AnsiballZ_package_facts.py'
Jan 21 13:50:38 compute-0 sudo[113529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:38 compute-0 python3.9[113531]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 21 13:50:38 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 21 13:50:38 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 21 13:50:38 compute-0 sudo[113529]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:39 compute-0 ceph-mon[75031]: pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:39 compute-0 ceph-mon[75031]: 11.1b scrub starts
Jan 21 13:50:39 compute-0 ceph-mon[75031]: 11.1b scrub ok
Jan 21 13:50:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:50:39
Jan 21 13:50:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:50:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:50:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', 'vms', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 21 13:50:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:50:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:39 compute-0 sudo[113681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yueyyimpftuniwqxjyoyhackctttjnhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003439.3323278-227-211883732440730/AnsiballZ_stat.py'
Jan 21 13:50:39 compute-0 sudo[113681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:39 compute-0 python3.9[113683]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:50:39 compute-0 sudo[113681]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:40 compute-0 sudo[113759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyplrdvqfnrngopraknbycmktpxztffs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003439.3323278-227-211883732440730/AnsiballZ_file.py'
Jan 21 13:50:40 compute-0 sudo[113759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:40 compute-0 python3.9[113761]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:50:40 compute-0 sudo[113759]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:40 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 21 13:50:40 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 21 13:50:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:50:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:50:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:50:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:50:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:50:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:50:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:50:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:50:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:50:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:50:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:50:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:50:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:50:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:50:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:50:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:50:41 compute-0 sudo[113911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcsxefdpyynfrcjbksfesswuapdyzhmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003440.692147-239-116736712346005/AnsiballZ_stat.py'
Jan 21 13:50:41 compute-0 sudo[113911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:41 compute-0 python3.9[113913]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:50:41 compute-0 sudo[113911]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:41 compute-0 ceph-mon[75031]: pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:41 compute-0 ceph-mon[75031]: 5.1a scrub starts
Jan 21 13:50:41 compute-0 ceph-mon[75031]: 5.1a scrub ok
Jan 21 13:50:41 compute-0 sudo[113989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiplxqdctltfikprlsszxbylffyyzbtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003440.692147-239-116736712346005/AnsiballZ_file.py'
Jan 21 13:50:41 compute-0 sudo[113989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:41 compute-0 python3.9[113991]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:50:41 compute-0 sudo[113989]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:41 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 21 13:50:41 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 21 13:50:42 compute-0 ceph-mon[75031]: 11.e scrub starts
Jan 21 13:50:42 compute-0 ceph-mon[75031]: 11.e scrub ok
Jan 21 13:50:42 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 21 13:50:42 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 21 13:50:42 compute-0 sudo[114141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhsihmuxzbuqvfoqcwfcuviyispapnxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003442.1285484-257-84837009025324/AnsiballZ_lineinfile.py'
Jan 21 13:50:42 compute-0 sudo[114141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:42 compute-0 python3.9[114143]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:50:42 compute-0 sudo[114141]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:43 compute-0 ceph-mon[75031]: pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:43 compute-0 ceph-mon[75031]: 10.f scrub starts
Jan 21 13:50:43 compute-0 ceph-mon[75031]: 10.f scrub ok
Jan 21 13:50:43 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 21 13:50:43 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 21 13:50:43 compute-0 sudo[114293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwctigdiwjsyptnwdwmghgamavrwlwjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003443.3090353-272-112169071816996/AnsiballZ_setup.py'
Jan 21 13:50:43 compute-0 sudo[114293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:43 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 21 13:50:43 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 21 13:50:43 compute-0 python3.9[114295]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:50:44 compute-0 sudo[114293]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:44 compute-0 ceph-mon[75031]: 5.11 scrub starts
Jan 21 13:50:44 compute-0 ceph-mon[75031]: 5.11 scrub ok
Jan 21 13:50:44 compute-0 ceph-mon[75031]: 7.13 scrub starts
Jan 21 13:50:44 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 21 13:50:44 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 21 13:50:44 compute-0 sudo[114377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-motnfbxwmhhdbhigngmvmkqzoqepplez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003443.3090353-272-112169071816996/AnsiballZ_systemd.py'
Jan 21 13:50:44 compute-0 sudo[114377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:45 compute-0 python3.9[114379]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:50:45 compute-0 sudo[114377]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:45 compute-0 ceph-mon[75031]: pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:45 compute-0 ceph-mon[75031]: 7.13 scrub ok
Jan 21 13:50:45 compute-0 ceph-mon[75031]: 5.18 scrub starts
Jan 21 13:50:45 compute-0 ceph-mon[75031]: 5.18 scrub ok
Jan 21 13:50:45 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 21 13:50:45 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 21 13:50:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:45 compute-0 sshd-session[109866]: Connection closed by 192.168.122.30 port 35244
Jan 21 13:50:45 compute-0 sshd-session[109863]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:50:45 compute-0 systemd-logind[780]: Session 38 logged out. Waiting for processes to exit.
Jan 21 13:50:45 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 21 13:50:45 compute-0 systemd[1]: session-38.scope: Consumed 25.934s CPU time.
Jan 21 13:50:45 compute-0 systemd-logind[780]: Removed session 38.
Jan 21 13:50:46 compute-0 ceph-mon[75031]: 2.9 scrub starts
Jan 21 13:50:46 compute-0 ceph-mon[75031]: 2.9 scrub ok
Jan 21 13:50:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:47 compute-0 ceph-mon[75031]: pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:47 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 21 13:50:47 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 21 13:50:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:47 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 21 13:50:47 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 21 13:50:48 compute-0 ceph-mon[75031]: 5.19 scrub starts
Jan 21 13:50:48 compute-0 ceph-mon[75031]: 5.19 scrub ok
Jan 21 13:50:48 compute-0 ceph-mon[75031]: 3.e scrub starts
Jan 21 13:50:48 compute-0 ceph-mon[75031]: 3.e scrub ok
Jan 21 13:50:49 compute-0 ceph-mon[75031]: pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:49 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 21 13:50:49 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 21 13:50:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:50 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 21 13:50:50 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:50:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 13:50:50 compute-0 ceph-mon[75031]: 2.1b scrub starts
Jan 21 13:50:50 compute-0 ceph-mon[75031]: 2.1b scrub ok
Jan 21 13:50:50 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 21 13:50:50 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 21 13:50:51 compute-0 ceph-mon[75031]: pgmap v298: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:51 compute-0 ceph-mon[75031]: 7.1b scrub starts
Jan 21 13:50:51 compute-0 ceph-mon[75031]: 7.1b scrub ok
Jan 21 13:50:51 compute-0 ceph-mon[75031]: 7.11 scrub starts
Jan 21 13:50:51 compute-0 ceph-mon[75031]: 7.11 scrub ok
Jan 21 13:50:51 compute-0 sshd-session[114406]: Accepted publickey for zuul from 192.168.122.30 port 33752 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:50:51 compute-0 systemd-logind[780]: New session 39 of user zuul.
Jan 21 13:50:51 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 21 13:50:51 compute-0 sshd-session[114406]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:50:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:51 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 21 13:50:51 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 21 13:50:52 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 21 13:50:52 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 21 13:50:52 compute-0 sudo[114559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eskvuliezvxulpwydqbumvgncuecxmrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003451.6998525-17-44573899189824/AnsiballZ_file.py'
Jan 21 13:50:52 compute-0 sudo[114559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:52 compute-0 python3.9[114561]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:50:52 compute-0 sudo[114559]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:52 compute-0 ceph-mon[75031]: 11.1f scrub starts
Jan 21 13:50:52 compute-0 ceph-mon[75031]: 11.1f scrub ok
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.783512) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003452783681, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7144, "num_deletes": 252, "total_data_size": 9893397, "memory_usage": 10080128, "flush_reason": "Manual Compaction"}
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003452860230, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7762988, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7287, "table_properties": {"data_size": 7736694, "index_size": 17219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 73382, "raw_average_key_size": 23, "raw_value_size": 7675304, "raw_average_value_size": 2413, "num_data_blocks": 757, "num_entries": 3180, "num_filter_entries": 3180, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003061, "oldest_key_time": 1769003061, "file_creation_time": 1769003452, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 76806 microseconds, and 27391 cpu microseconds.
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.860317) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7762988 bytes OK
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.860351) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.862064) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.862092) EVENT_LOG_v1 {"time_micros": 1769003452862084, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.862129) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9862517, prev total WAL file size 9862517, number of live WAL files 2.
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:50:52 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.866004) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7581KB) 13(58KB) 8(1944B)]
Jan 21 13:50:52 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003452866729, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7824892, "oldest_snapshot_seqno": -1}
Jan 21 13:50:52 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 21 13:50:53 compute-0 sudo[114712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbbelfuojynltxcgdeinbiabwdhqxjlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003452.5726554-29-98344409495367/AnsiballZ_stat.py'
Jan 21 13:50:53 compute-0 sudo[114712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3005 keys, 7777743 bytes, temperature: kUnknown
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003453103928, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7777743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7751863, "index_size": 17258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7557, "raw_key_size": 71785, "raw_average_key_size": 23, "raw_value_size": 7691824, "raw_average_value_size": 2559, "num_data_blocks": 760, "num_entries": 3005, "num_filter_entries": 3005, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769003452, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:53.104197) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7777743 bytes
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:53.106973) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.0 rd, 32.8 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3295, records dropped: 290 output_compression: NoCompression
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:53.106995) EVENT_LOG_v1 {"time_micros": 1769003453106984, "job": 4, "event": "compaction_finished", "compaction_time_micros": 237019, "compaction_time_cpu_micros": 31843, "output_level": 6, "num_output_files": 1, "total_output_size": 7777743, "num_input_records": 3295, "num_output_records": 3005, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003453108948, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003453109076, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003453109203, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 21 13:50:53 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:50:52.865797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:50:53 compute-0 python3.9[114714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:50:53 compute-0 sudo[114712]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:53 compute-0 sudo[114790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yygjksobicksdsgpfaqdhvjjbstndxko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003452.5726554-29-98344409495367/AnsiballZ_file.py'
Jan 21 13:50:53 compute-0 sudo[114790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:50:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:53 compute-0 ceph-mon[75031]: pgmap v299: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:53 compute-0 ceph-mon[75031]: 8.1a scrub starts
Jan 21 13:50:53 compute-0 ceph-mon[75031]: 8.1a scrub ok
Jan 21 13:50:53 compute-0 ceph-mon[75031]: 11.1a scrub starts
Jan 21 13:50:53 compute-0 ceph-mon[75031]: 11.1a scrub ok
Jan 21 13:50:53 compute-0 python3.9[114792]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:50:53 compute-0 sudo[114790]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:54 compute-0 sshd-session[114409]: Connection closed by 192.168.122.30 port 33752
Jan 21 13:50:54 compute-0 sshd-session[114406]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:50:54 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 21 13:50:54 compute-0 systemd[1]: session-39.scope: Consumed 1.693s CPU time.
Jan 21 13:50:54 compute-0 systemd-logind[780]: Session 39 logged out. Waiting for processes to exit.
Jan 21 13:50:54 compute-0 systemd-logind[780]: Removed session 39.
Jan 21 13:50:54 compute-0 sudo[114818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:50:54 compute-0 sudo[114818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:50:54 compute-0 sudo[114818]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:54 compute-0 sudo[114843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:50:54 compute-0 sudo[114843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:50:54 compute-0 ceph-mon[75031]: pgmap v300: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:54 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 21 13:50:54 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 21 13:50:54 compute-0 sudo[114843]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:50:55 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:50:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:50:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:50:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:50:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:50:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:50:55 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:50:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:50:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:50:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:50:55 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:50:55 compute-0 sudo[114899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:50:55 compute-0 sudo[114899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:50:55 compute-0 sudo[114899]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:55 compute-0 sudo[114924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:50:55 compute-0 sudo[114924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:50:55 compute-0 podman[114961]: 2026-01-21 13:50:55.527033446 +0000 UTC m=+0.049812150 container create 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:50:55 compute-0 systemd[1]: Started libpod-conmon-9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d.scope.
Jan 21 13:50:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:50:55 compute-0 podman[114961]: 2026-01-21 13:50:55.504173495 +0000 UTC m=+0.026952219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:50:55 compute-0 podman[114961]: 2026-01-21 13:50:55.619537761 +0000 UTC m=+0.142316535 container init 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:50:55 compute-0 podman[114961]: 2026-01-21 13:50:55.627678076 +0000 UTC m=+0.150456760 container start 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 13:50:55 compute-0 podman[114961]: 2026-01-21 13:50:55.631630251 +0000 UTC m=+0.154408945 container attach 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 13:50:55 compute-0 gifted_montalcini[114977]: 167 167
Jan 21 13:50:55 compute-0 systemd[1]: libpod-9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d.scope: Deactivated successfully.
Jan 21 13:50:55 compute-0 conmon[114977]: conmon 9ca2e5206b555162dade <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d.scope/container/memory.events
Jan 21 13:50:55 compute-0 podman[114961]: 2026-01-21 13:50:55.637832241 +0000 UTC m=+0.160610925 container died 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 13:50:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-308508bd4e8dd9fd84e0b3aa2b6b65f94655a8cd89ecc0ab28bc6f59f66623f8-merged.mount: Deactivated successfully.
Jan 21 13:50:55 compute-0 podman[114961]: 2026-01-21 13:50:55.677015443 +0000 UTC m=+0.199794117 container remove 9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:50:55 compute-0 systemd[1]: libpod-conmon-9ca2e5206b555162dadefbda0ff0ec9fa876e68f0e76ebcab53d4e9fc8f1656d.scope: Deactivated successfully.
Jan 21 13:50:55 compute-0 ceph-mon[75031]: 11.1c scrub starts
Jan 21 13:50:55 compute-0 ceph-mon[75031]: 11.1c scrub ok
Jan 21 13:50:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:50:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:50:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:50:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:50:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:50:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:50:55 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 21 13:50:55 compute-0 podman[115000]: 2026-01-21 13:50:55.873057458 +0000 UTC m=+0.067677279 container create c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 13:50:55 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 21 13:50:55 compute-0 systemd[1]: Started libpod-conmon-c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2.scope.
Jan 21 13:50:55 compute-0 podman[115000]: 2026-01-21 13:50:55.84697278 +0000 UTC m=+0.041592641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:50:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:55 compute-0 podman[115000]: 2026-01-21 13:50:55.982023708 +0000 UTC m=+0.176643539 container init c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:50:55 compute-0 podman[115000]: 2026-01-21 13:50:55.997731577 +0000 UTC m=+0.192351398 container start c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 13:50:56 compute-0 podman[115000]: 2026-01-21 13:50:56.002355918 +0000 UTC m=+0.196975789 container attach c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:50:56 compute-0 blissful_mayer[115017]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:50:56 compute-0 blissful_mayer[115017]: --> All data devices are unavailable
Jan 21 13:50:56 compute-0 systemd[1]: libpod-c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2.scope: Deactivated successfully.
Jan 21 13:50:56 compute-0 podman[115000]: 2026-01-21 13:50:56.584205112 +0000 UTC m=+0.778824933 container died c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 13:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1347dae2acf2f188ac74590f4906d72a1fa51ec608d6827c80d637b727d903d-merged.mount: Deactivated successfully.
Jan 21 13:50:56 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 21 13:50:56 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 21 13:50:56 compute-0 podman[115000]: 2026-01-21 13:50:56.651337077 +0000 UTC m=+0.845956868 container remove c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:50:56 compute-0 systemd[1]: libpod-conmon-c950ec386773edf96d8532f5a7507c3e726bcfd8bbb6abce36710f16ce4f71f2.scope: Deactivated successfully.
Jan 21 13:50:56 compute-0 sudo[114924]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:50:56 compute-0 sudo[115051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:50:56 compute-0 sudo[115051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:50:56 compute-0 sudo[115051]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:56 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 21 13:50:56 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 21 13:50:56 compute-0 sudo[115076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:50:56 compute-0 sudo[115076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:50:57 compute-0 ceph-mon[75031]: pgmap v301: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:57 compute-0 ceph-mon[75031]: 8.1c scrub starts
Jan 21 13:50:57 compute-0 ceph-mon[75031]: 8.1c scrub ok
Jan 21 13:50:57 compute-0 ceph-mon[75031]: 4.2 scrub starts
Jan 21 13:50:57 compute-0 ceph-mon[75031]: 4.2 scrub ok
Jan 21 13:50:57 compute-0 podman[115113]: 2026-01-21 13:50:57.218257941 +0000 UTC m=+0.053605870 container create df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 13:50:57 compute-0 systemd[1]: Started libpod-conmon-df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81.scope.
Jan 21 13:50:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:50:57 compute-0 podman[115113]: 2026-01-21 13:50:57.191866767 +0000 UTC m=+0.027214786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:50:57 compute-0 podman[115113]: 2026-01-21 13:50:57.301698838 +0000 UTC m=+0.137046787 container init df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 13:50:57 compute-0 podman[115113]: 2026-01-21 13:50:57.30718934 +0000 UTC m=+0.142537299 container start df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:50:57 compute-0 podman[115113]: 2026-01-21 13:50:57.312104468 +0000 UTC m=+0.147452427 container attach df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 13:50:57 compute-0 zen_ride[115130]: 167 167
Jan 21 13:50:57 compute-0 systemd[1]: libpod-df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81.scope: Deactivated successfully.
Jan 21 13:50:57 compute-0 podman[115113]: 2026-01-21 13:50:57.315160953 +0000 UTC m=+0.150508882 container died df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 13:50:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-14639df35d5313158c2ad933d4bd235a31c34b33962ce03cccf7f9c1e7d648ea-merged.mount: Deactivated successfully.
Jan 21 13:50:57 compute-0 podman[115113]: 2026-01-21 13:50:57.367854129 +0000 UTC m=+0.203202098 container remove df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 13:50:57 compute-0 systemd[1]: libpod-conmon-df2d4df8965792bfccce15843521303866105527e21dd5875ce98698f9086f81.scope: Deactivated successfully.
Jan 21 13:50:57 compute-0 podman[115154]: 2026-01-21 13:50:57.529535088 +0000 UTC m=+0.049328458 container create 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:50:57 compute-0 systemd[1]: Started libpod-conmon-21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8.scope.
Jan 21 13:50:57 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 21 13:50:57 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 21 13:50:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6e298a7c1820f226852f941cb6c7e29e1bab44ea953b342fea579ced02387f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6e298a7c1820f226852f941cb6c7e29e1bab44ea953b342fea579ced02387f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6e298a7c1820f226852f941cb6c7e29e1bab44ea953b342fea579ced02387f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da6e298a7c1820f226852f941cb6c7e29e1bab44ea953b342fea579ced02387f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:57 compute-0 podman[115154]: 2026-01-21 13:50:57.509325992 +0000 UTC m=+0.029119392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:50:57 compute-0 podman[115154]: 2026-01-21 13:50:57.608337673 +0000 UTC m=+0.128131043 container init 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:50:57 compute-0 podman[115154]: 2026-01-21 13:50:57.614540362 +0000 UTC m=+0.134333732 container start 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:50:57 compute-0 podman[115154]: 2026-01-21 13:50:57.618014216 +0000 UTC m=+0.137807636 container attach 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 13:50:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]: {
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:     "0": [
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:         {
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "devices": [
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "/dev/loop3"
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             ],
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_name": "ceph_lv0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_size": "21470642176",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "name": "ceph_lv0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "tags": {
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.cluster_name": "ceph",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.crush_device_class": "",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.encrypted": "0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.objectstore": "bluestore",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.osd_id": "0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.type": "block",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.vdo": "0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.with_tpm": "0"
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             },
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "type": "block",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "vg_name": "ceph_vg0"
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:         }
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:     ],
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:     "1": [
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:         {
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "devices": [
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "/dev/loop4"
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             ],
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_name": "ceph_lv1",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_size": "21470642176",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "name": "ceph_lv1",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "tags": {
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.cluster_name": "ceph",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.crush_device_class": "",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.encrypted": "0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.objectstore": "bluestore",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.osd_id": "1",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.type": "block",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.vdo": "0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.with_tpm": "0"
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             },
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "type": "block",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "vg_name": "ceph_vg1"
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:         }
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:     ],
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:     "2": [
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:         {
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "devices": [
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "/dev/loop5"
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             ],
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_name": "ceph_lv2",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_size": "21470642176",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "name": "ceph_lv2",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "tags": {
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.cluster_name": "ceph",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.crush_device_class": "",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.encrypted": "0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.objectstore": "bluestore",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.osd_id": "2",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.type": "block",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.vdo": "0",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:                 "ceph.with_tpm": "0"
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             },
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "type": "block",
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:             "vg_name": "ceph_vg2"
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:         }
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]:     ]
Jan 21 13:50:57 compute-0 thirsty_tharp[115171]: }
Jan 21 13:50:57 compute-0 systemd[1]: libpod-21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8.scope: Deactivated successfully.
Jan 21 13:50:57 compute-0 podman[115154]: 2026-01-21 13:50:57.928141745 +0000 UTC m=+0.447935115 container died 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 13:50:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-da6e298a7c1820f226852f941cb6c7e29e1bab44ea953b342fea579ced02387f-merged.mount: Deactivated successfully.
Jan 21 13:50:57 compute-0 podman[115154]: 2026-01-21 13:50:57.96909322 +0000 UTC m=+0.488886590 container remove 21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:50:57 compute-0 systemd[1]: libpod-conmon-21a6d42aa7b9e3581c6b699c60c8225d150b591ae9025e1e26d7280377cacda8.scope: Deactivated successfully.
Jan 21 13:50:58 compute-0 sudo[115076]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:58 compute-0 ceph-mon[75031]: 4.13 scrub starts
Jan 21 13:50:58 compute-0 ceph-mon[75031]: 4.13 scrub ok
Jan 21 13:50:58 compute-0 ceph-mon[75031]: 10.11 scrub starts
Jan 21 13:50:58 compute-0 ceph-mon[75031]: 10.11 scrub ok
Jan 21 13:50:58 compute-0 sudo[115194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:50:58 compute-0 sudo[115194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:50:58 compute-0 sudo[115194]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:58 compute-0 sudo[115219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:50:58 compute-0 sudo[115219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:50:58 compute-0 podman[115256]: 2026-01-21 13:50:58.474879865 +0000 UTC m=+0.055134877 container create 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:50:58 compute-0 systemd[1]: Started libpod-conmon-76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43.scope.
Jan 21 13:50:58 compute-0 podman[115256]: 2026-01-21 13:50:58.446893572 +0000 UTC m=+0.027148664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:50:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:50:58 compute-0 podman[115256]: 2026-01-21 13:50:58.554443779 +0000 UTC m=+0.134698791 container init 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 21 13:50:58 compute-0 podman[115256]: 2026-01-21 13:50:58.561238652 +0000 UTC m=+0.141493654 container start 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 13:50:58 compute-0 podman[115256]: 2026-01-21 13:50:58.565044263 +0000 UTC m=+0.145299295 container attach 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 13:50:58 compute-0 fervent_cohen[115273]: 167 167
Jan 21 13:50:58 compute-0 systemd[1]: libpod-76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43.scope: Deactivated successfully.
Jan 21 13:50:58 compute-0 podman[115256]: 2026-01-21 13:50:58.570214208 +0000 UTC m=+0.150469250 container died 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 13:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9443f10b03f70a372327ac87c3d0357dbfad873d85986a9fce52b9ee9dc6e01e-merged.mount: Deactivated successfully.
Jan 21 13:50:58 compute-0 podman[115256]: 2026-01-21 13:50:58.619999616 +0000 UTC m=+0.200254648 container remove 76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:50:58 compute-0 systemd[1]: libpod-conmon-76b6c3ac466b87e12448a559f344175519cbba98636889be34f2c9ec5c9c8b43.scope: Deactivated successfully.
Jan 21 13:50:58 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 21 13:50:58 compute-0 podman[115297]: 2026-01-21 13:50:58.838268664 +0000 UTC m=+0.057477143 container create 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:50:58 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 21 13:50:58 compute-0 systemd[1]: Started libpod-conmon-139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef.scope.
Jan 21 13:50:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b1ce0906736a284b8b473fb1989e1bbb4d7118c1e8cc0380406568947631ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:58 compute-0 podman[115297]: 2026-01-21 13:50:58.823018498 +0000 UTC m=+0.042227007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b1ce0906736a284b8b473fb1989e1bbb4d7118c1e8cc0380406568947631ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b1ce0906736a284b8b473fb1989e1bbb4d7118c1e8cc0380406568947631ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b1ce0906736a284b8b473fb1989e1bbb4d7118c1e8cc0380406568947631ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:50:58 compute-0 podman[115297]: 2026-01-21 13:50:58.939228923 +0000 UTC m=+0.158437442 container init 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:50:58 compute-0 podman[115297]: 2026-01-21 13:50:58.948877964 +0000 UTC m=+0.168086473 container start 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 13:50:58 compute-0 podman[115297]: 2026-01-21 13:50:58.953481406 +0000 UTC m=+0.172689925 container attach 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:50:59 compute-0 ceph-mon[75031]: pgmap v302: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:59 compute-0 sshd-session[115364]: Accepted publickey for zuul from 192.168.122.30 port 40694 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:50:59 compute-0 systemd-logind[780]: New session 40 of user zuul.
Jan 21 13:50:59 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 21 13:50:59 compute-0 sshd-session[115364]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:50:59 compute-0 lvm[115416]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:50:59 compute-0 lvm[115416]: VG ceph_vg1 finished
Jan 21 13:50:59 compute-0 lvm[115412]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:50:59 compute-0 lvm[115412]: VG ceph_vg0 finished
Jan 21 13:50:59 compute-0 lvm[115421]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:50:59 compute-0 lvm[115421]: VG ceph_vg2 finished
Jan 21 13:50:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:50:59 compute-0 lvm[115444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:50:59 compute-0 lvm[115444]: VG ceph_vg2 finished
Jan 21 13:50:59 compute-0 hopeful_williamson[115313]: {}
Jan 21 13:50:59 compute-0 systemd[1]: libpod-139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef.scope: Deactivated successfully.
Jan 21 13:50:59 compute-0 systemd[1]: libpod-139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef.scope: Consumed 1.261s CPU time.
Jan 21 13:50:59 compute-0 podman[115297]: 2026-01-21 13:50:59.768394325 +0000 UTC m=+0.987602844 container died 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 13:50:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-99b1ce0906736a284b8b473fb1989e1bbb4d7118c1e8cc0380406568947631ad-merged.mount: Deactivated successfully.
Jan 21 13:50:59 compute-0 podman[115297]: 2026-01-21 13:50:59.826443881 +0000 UTC m=+1.045652380 container remove 139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_williamson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 13:50:59 compute-0 systemd[1]: libpod-conmon-139672ae7a7a2c6cf3287dff1226bbee754e35804592a94656737dfb60dfc8ef.scope: Deactivated successfully.
Jan 21 13:50:59 compute-0 sudo[115219]: pam_unix(sudo:session): session closed for user root
Jan 21 13:50:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:50:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:50:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:50:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:51:00 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 21 13:51:00 compute-0 sudo[115466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:51:00 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 21 13:51:00 compute-0 sudo[115466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:51:00 compute-0 sudo[115466]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:00 compute-0 ceph-mon[75031]: 3.16 scrub starts
Jan 21 13:51:00 compute-0 ceph-mon[75031]: 3.16 scrub ok
Jan 21 13:51:00 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:51:00 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:51:00 compute-0 python3.9[115588]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:51:00 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 21 13:51:00 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 21 13:51:01 compute-0 ceph-mon[75031]: pgmap v303: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:01 compute-0 ceph-mon[75031]: 8.14 scrub starts
Jan 21 13:51:01 compute-0 ceph-mon[75031]: 8.14 scrub ok
Jan 21 13:51:01 compute-0 sudo[115742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxokpqffromficdbdcmxhmrrcqdiadki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003461.0426338-28-324287996350/AnsiballZ_file.py'
Jan 21 13:51:01 compute-0 sudo[115742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:01 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 21 13:51:01 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 21 13:51:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:01 compute-0 python3.9[115744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:01 compute-0 sudo[115742]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:02 compute-0 ceph-mon[75031]: 8.18 scrub starts
Jan 21 13:51:02 compute-0 ceph-mon[75031]: 8.18 scrub ok
Jan 21 13:51:02 compute-0 ceph-mon[75031]: 4.4 scrub starts
Jan 21 13:51:02 compute-0 ceph-mon[75031]: 4.4 scrub ok
Jan 21 13:51:02 compute-0 sudo[115917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzxucoeqahfaftvajfikrdkdtxgkgzar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003461.9740267-36-6545917779190/AnsiballZ_stat.py'
Jan 21 13:51:02 compute-0 sudo[115917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:02 compute-0 python3.9[115919]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:02 compute-0 sudo[115917]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:02 compute-0 sudo[115995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqzdrquvtsdfgiojzqsmzlcjymbqaxcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003461.9740267-36-6545917779190/AnsiballZ_file.py'
Jan 21 13:51:02 compute-0 sudo[115995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:03 compute-0 ceph-mon[75031]: pgmap v304: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:03 compute-0 python3.9[115997]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.e63auofc recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:03 compute-0 sudo[115995]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:03 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 21 13:51:03 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 21 13:51:03 compute-0 sudo[116147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhgaygcdzmndnonrirpiugdqhhflrztc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003463.5577586-56-241173808209369/AnsiballZ_stat.py'
Jan 21 13:51:03 compute-0 sudo[116147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:04 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 21 13:51:04 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 21 13:51:04 compute-0 python3.9[116149]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:04 compute-0 sudo[116147]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:04 compute-0 sudo[116225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-outzejzvypbjtnrldaayjrtdhbhxyurn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003463.5577586-56-241173808209369/AnsiballZ_file.py'
Jan 21 13:51:04 compute-0 sudo[116225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:04 compute-0 python3.9[116227]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.oajyrk3w recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:04 compute-0 sudo[116225]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:05 compute-0 ceph-mon[75031]: pgmap v305: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:05 compute-0 ceph-mon[75031]: 11.11 scrub starts
Jan 21 13:51:05 compute-0 ceph-mon[75031]: 11.11 scrub ok
Jan 21 13:51:05 compute-0 ceph-mon[75031]: 11.17 scrub starts
Jan 21 13:51:05 compute-0 ceph-mon[75031]: 11.17 scrub ok
Jan 21 13:51:05 compute-0 sudo[116377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmvqbqtppaqwglqrykulmunvphqwrylk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003464.7876244-69-15548490473804/AnsiballZ_file.py'
Jan 21 13:51:05 compute-0 sudo[116377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:05 compute-0 python3.9[116379]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:51:05 compute-0 sudo[116377]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:05 compute-0 sudo[116529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phrvqsmjojizsfvszwgtqmltsymkynot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003465.5319939-77-77841299126359/AnsiballZ_stat.py'
Jan 21 13:51:05 compute-0 sudo[116529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:06 compute-0 python3.9[116531]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:06 compute-0 sudo[116529]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:06 compute-0 sudo[116607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzzszgxuxpbpxwrltqbysuzipjvqhyyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003465.5319939-77-77841299126359/AnsiballZ_file.py'
Jan 21 13:51:06 compute-0 sudo[116607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:06 compute-0 python3.9[116609]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:51:06 compute-0 sudo[116607]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:07 compute-0 sudo[116759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrmbanytiyxulgcaslubpgawjambglla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003466.7247589-77-197277304262922/AnsiballZ_stat.py'
Jan 21 13:51:07 compute-0 sudo[116759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:07 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 21 13:51:07 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 21 13:51:07 compute-0 ceph-mon[75031]: pgmap v306: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:07 compute-0 python3.9[116761]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:07 compute-0 sudo[116759]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:07 compute-0 sudo[116837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpxwdxazpurdyagthdmovpfjfmutctiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003466.7247589-77-197277304262922/AnsiballZ_file.py'
Jan 21 13:51:07 compute-0 sudo[116837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:07 compute-0 python3.9[116839]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:51:07 compute-0 sudo[116837]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:08 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 21 13:51:08 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 21 13:51:08 compute-0 ceph-mon[75031]: 10.e scrub starts
Jan 21 13:51:08 compute-0 ceph-mon[75031]: 10.e scrub ok
Jan 21 13:51:08 compute-0 sudo[116989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stfovmaqvlywsftdcxdxnactdufomxgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003468.0473359-100-60353588083754/AnsiballZ_file.py'
Jan 21 13:51:08 compute-0 sudo[116989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:08 compute-0 python3.9[116991]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:08 compute-0 sudo[116989]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:09 compute-0 sudo[117141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loinaoahjljyiaskkywsenlamajtuxoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003468.7498217-108-82674919462043/AnsiballZ_stat.py'
Jan 21 13:51:09 compute-0 sudo[117141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:09 compute-0 ceph-mon[75031]: pgmap v307: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:09 compute-0 ceph-mon[75031]: 10.d scrub starts
Jan 21 13:51:09 compute-0 ceph-mon[75031]: 10.d scrub ok
Jan 21 13:51:09 compute-0 python3.9[117143]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:09 compute-0 sudo[117141]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:09 compute-0 sudo[117219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqbztmwshincxswldokwmsdehblzpaqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003468.7498217-108-82674919462043/AnsiballZ_file.py'
Jan 21 13:51:09 compute-0 sudo[117219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:09 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 21 13:51:09 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 21 13:51:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:09 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 21 13:51:09 compute-0 python3.9[117221]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:09 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 21 13:51:09 compute-0 sudo[117219]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:10 compute-0 ceph-mon[75031]: 4.f scrub starts
Jan 21 13:51:10 compute-0 ceph-mon[75031]: 4.f scrub ok
Jan 21 13:51:10 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 21 13:51:10 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 21 13:51:10 compute-0 sudo[117371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqeyumxkyygiuvdwldfqbnoifgrelerp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003470.011685-120-242131152666624/AnsiballZ_stat.py'
Jan 21 13:51:10 compute-0 sudo[117371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:10 compute-0 python3.9[117373]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:10 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 21 13:51:10 compute-0 sudo[117371]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:10 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 21 13:51:10 compute-0 sudo[117449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xifyiwxorjsxskxpvchnlezoiqxhtdcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003470.011685-120-242131152666624/AnsiballZ_file.py'
Jan 21 13:51:10 compute-0 sudo[117449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:51:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:51:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:51:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:51:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:51:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:51:11 compute-0 python3.9[117451]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:11 compute-0 sudo[117449]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:11 compute-0 ceph-mon[75031]: pgmap v308: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:11 compute-0 ceph-mon[75031]: 8.12 scrub starts
Jan 21 13:51:11 compute-0 ceph-mon[75031]: 8.12 scrub ok
Jan 21 13:51:11 compute-0 ceph-mon[75031]: 10.15 scrub starts
Jan 21 13:51:11 compute-0 ceph-mon[75031]: 10.15 scrub ok
Jan 21 13:51:11 compute-0 ceph-mon[75031]: 4.d scrub starts
Jan 21 13:51:11 compute-0 ceph-mon[75031]: 4.d scrub ok
Jan 21 13:51:11 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 21 13:51:11 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 21 13:51:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:11 compute-0 sudo[117601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjymnygolcpqbdcpdwejhzalupgjtrkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003471.3024437-132-6450211959099/AnsiballZ_systemd.py'
Jan 21 13:51:11 compute-0 sudo[117601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:12 compute-0 ceph-mon[75031]: 4.7 scrub starts
Jan 21 13:51:12 compute-0 ceph-mon[75031]: 4.7 scrub ok
Jan 21 13:51:12 compute-0 python3.9[117603]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:51:12 compute-0 systemd[1]: Reloading.
Jan 21 13:51:12 compute-0 systemd-rc-local-generator[117628]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:51:12 compute-0 systemd-sysv-generator[117633]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:51:12 compute-0 sudo[117601]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:13 compute-0 ceph-mon[75031]: pgmap v309: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:13 compute-0 sudo[117790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okpsnyrsdadzczowurcsudyfhygkwlbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003472.851766-140-228639353679758/AnsiballZ_stat.py'
Jan 21 13:51:13 compute-0 sudo[117790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:13 compute-0 python3.9[117792]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:13 compute-0 sudo[117790]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:13 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 21 13:51:13 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 21 13:51:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:13 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 21 13:51:13 compute-0 sudo[117868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvmhmwapobaoplkqqtylynivuafrxhqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003472.851766-140-228639353679758/AnsiballZ_file.py'
Jan 21 13:51:13 compute-0 sudo[117868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:13 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 21 13:51:13 compute-0 python3.9[117870]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:13 compute-0 sudo[117868]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:14 compute-0 ceph-mon[75031]: 4.5 scrub starts
Jan 21 13:51:14 compute-0 ceph-mon[75031]: 4.5 scrub ok
Jan 21 13:51:14 compute-0 ceph-mon[75031]: 11.1e scrub starts
Jan 21 13:51:14 compute-0 ceph-mon[75031]: 11.1e scrub ok
Jan 21 13:51:14 compute-0 sudo[118020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqkvsmsgusbyuogepvmgdcyuddzxfcxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003474.158787-152-173503058961434/AnsiballZ_stat.py'
Jan 21 13:51:14 compute-0 sudo[118020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:14 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 21 13:51:14 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 21 13:51:14 compute-0 python3.9[118022]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:14 compute-0 sudo[118020]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:15 compute-0 sudo[118098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjrviqguajaihpkzzrjaerjspjfkjvkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003474.158787-152-173503058961434/AnsiballZ_file.py'
Jan 21 13:51:15 compute-0 sudo[118098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:15 compute-0 ceph-mon[75031]: pgmap v310: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:15 compute-0 ceph-mon[75031]: 3.18 scrub starts
Jan 21 13:51:15 compute-0 ceph-mon[75031]: 3.18 scrub ok
Jan 21 13:51:15 compute-0 python3.9[118100]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:15 compute-0 sudo[118098]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:15 compute-0 sudo[118250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibakhmyjgjsifzrmtppcpliutdrqyped ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003475.547238-164-57000705029301/AnsiballZ_systemd.py'
Jan 21 13:51:15 compute-0 sudo[118250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:16 compute-0 python3.9[118252]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:51:16 compute-0 systemd[1]: Reloading.
Jan 21 13:51:16 compute-0 systemd-rc-local-generator[118278]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:51:16 compute-0 systemd-sysv-generator[118283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:51:16 compute-0 systemd[1]: Starting Create netns directory...
Jan 21 13:51:16 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 21 13:51:16 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 21 13:51:16 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 13:51:16 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 13:51:16 compute-0 systemd[1]: Finished Create netns directory.
Jan 21 13:51:16 compute-0 sudo[118250]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:17 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 21 13:51:17 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 21 13:51:17 compute-0 ceph-mon[75031]: pgmap v311: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:17 compute-0 ceph-mon[75031]: 4.14 scrub starts
Jan 21 13:51:17 compute-0 ceph-mon[75031]: 4.14 scrub ok
Jan 21 13:51:17 compute-0 python3.9[118443]: ansible-ansible.builtin.service_facts Invoked
Jan 21 13:51:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:17 compute-0 network[118460]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 13:51:17 compute-0 network[118461]: 'network-scripts' will be removed from distribution in near future.
Jan 21 13:51:17 compute-0 network[118462]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 13:51:17 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 21 13:51:17 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 21 13:51:18 compute-0 ceph-mon[75031]: 10.9 scrub starts
Jan 21 13:51:18 compute-0 ceph-mon[75031]: 10.9 scrub ok
Jan 21 13:51:18 compute-0 ceph-mon[75031]: 4.11 scrub starts
Jan 21 13:51:18 compute-0 ceph-mon[75031]: 4.11 scrub ok
Jan 21 13:51:18 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 21 13:51:18 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 21 13:51:19 compute-0 ceph-mon[75031]: pgmap v312: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:19 compute-0 ceph-mon[75031]: 4.12 scrub starts
Jan 21 13:51:19 compute-0 ceph-mon[75031]: 4.12 scrub ok
Jan 21 13:51:19 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 21 13:51:19 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 21 13:51:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:20 compute-0 ceph-mon[75031]: 4.9 scrub starts
Jan 21 13:51:20 compute-0 ceph-mon[75031]: 4.9 scrub ok
Jan 21 13:51:20 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 21 13:51:20 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 21 13:51:21 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 21 13:51:21 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 21 13:51:21 compute-0 ceph-mon[75031]: pgmap v313: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:21 compute-0 ceph-mon[75031]: 7.1c scrub starts
Jan 21 13:51:21 compute-0 ceph-mon[75031]: 7.1c scrub ok
Jan 21 13:51:21 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 21 13:51:21 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 21 13:51:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:22 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 21 13:51:22 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 21 13:51:22 compute-0 sudo[118722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twutvdkiyaalpmfnelojtcukqghyruuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003481.8227346-190-63624096510429/AnsiballZ_stat.py'
Jan 21 13:51:22 compute-0 sudo[118722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:22 compute-0 ceph-mon[75031]: 8.f scrub starts
Jan 21 13:51:22 compute-0 ceph-mon[75031]: 8.f scrub ok
Jan 21 13:51:22 compute-0 ceph-mon[75031]: 4.10 scrub starts
Jan 21 13:51:22 compute-0 ceph-mon[75031]: 4.10 scrub ok
Jan 21 13:51:22 compute-0 python3.9[118724]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:22 compute-0 sudo[118722]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:22 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 21 13:51:22 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 21 13:51:22 compute-0 sudo[118800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jexrokxippaqbfqqubbfkqnnvqcthjif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003481.8227346-190-63624096510429/AnsiballZ_file.py'
Jan 21 13:51:22 compute-0 sudo[118800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:23 compute-0 python3.9[118802]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:23 compute-0 sudo[118800]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:23 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 21 13:51:23 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 21 13:51:23 compute-0 ceph-mon[75031]: pgmap v314: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:23 compute-0 ceph-mon[75031]: 8.6 scrub starts
Jan 21 13:51:23 compute-0 ceph-mon[75031]: 8.6 scrub ok
Jan 21 13:51:23 compute-0 ceph-mon[75031]: 6.8 scrub starts
Jan 21 13:51:23 compute-0 ceph-mon[75031]: 6.8 scrub ok
Jan 21 13:51:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:23 compute-0 sudo[118952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hikjsiomnixrvrblmlgghprazktnmmxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003483.3359766-203-210555526124835/AnsiballZ_file.py'
Jan 21 13:51:23 compute-0 sudo[118952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:23 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 21 13:51:23 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 21 13:51:23 compute-0 python3.9[118954]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:23 compute-0 sudo[118952]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:24 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 21 13:51:24 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 21 13:51:24 compute-0 ceph-mon[75031]: 6.a scrub starts
Jan 21 13:51:24 compute-0 ceph-mon[75031]: 6.a scrub ok
Jan 21 13:51:24 compute-0 ceph-mon[75031]: 6.f scrub starts
Jan 21 13:51:24 compute-0 ceph-mon[75031]: 6.f scrub ok
Jan 21 13:51:24 compute-0 sudo[119104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwofgatomclvzowaqvormktgvyqgtggk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003484.0814762-211-121519074552603/AnsiballZ_stat.py'
Jan 21 13:51:24 compute-0 sudo[119104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:24 compute-0 python3.9[119106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:24 compute-0 sudo[119104]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:24 compute-0 sudo[119182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erfjnjdmjnglwqctdczparphweeinrxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003484.0814762-211-121519074552603/AnsiballZ_file.py'
Jan 21 13:51:24 compute-0 sudo[119182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:25 compute-0 python3.9[119184]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:25 compute-0 sudo[119182]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:25 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 21 13:51:25 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 21 13:51:25 compute-0 ceph-mon[75031]: pgmap v315: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:25 compute-0 ceph-mon[75031]: 6.5 scrub starts
Jan 21 13:51:25 compute-0 ceph-mon[75031]: 6.5 scrub ok
Jan 21 13:51:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:25 compute-0 sudo[119334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cecpjrmaqglvamxqtlybfxiphuwmcohn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003485.4423165-226-70996149130058/AnsiballZ_timezone.py'
Jan 21 13:51:25 compute-0 sudo[119334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:26 compute-0 python3.9[119336]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 21 13:51:26 compute-0 systemd[1]: Starting Time & Date Service...
Jan 21 13:51:26 compute-0 systemd[1]: Started Time & Date Service.
Jan 21 13:51:26 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 21 13:51:26 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 21 13:51:26 compute-0 sudo[119334]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:26 compute-0 ceph-mon[75031]: 6.9 scrub starts
Jan 21 13:51:26 compute-0 ceph-mon[75031]: 6.9 scrub ok
Jan 21 13:51:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:26 compute-0 sudo[119490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sytumoherumwksmonypulevlvnqsgpmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003486.5326314-235-161069841454569/AnsiballZ_file.py'
Jan 21 13:51:26 compute-0 sudo[119490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:27 compute-0 python3.9[119492]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:27 compute-0 sudo[119490]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:27 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 21 13:51:27 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 21 13:51:27 compute-0 ceph-mon[75031]: pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:27 compute-0 ceph-mon[75031]: 6.7 scrub starts
Jan 21 13:51:27 compute-0 ceph-mon[75031]: 6.7 scrub ok
Jan 21 13:51:27 compute-0 sudo[119642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozvirkvhqjrreplhcfazucycspyzytuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003487.2411191-243-209621196795640/AnsiballZ_stat.py'
Jan 21 13:51:27 compute-0 sudo[119642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:27 compute-0 python3.9[119644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:27 compute-0 sudo[119642]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:28 compute-0 sudo[119720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtdivpwzvmyxfffafscsiconwzoxazvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003487.2411191-243-209621196795640/AnsiballZ_file.py'
Jan 21 13:51:28 compute-0 sudo[119720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:28 compute-0 python3.9[119722]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:28 compute-0 sudo[119720]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:28 compute-0 ceph-mon[75031]: 6.3 scrub starts
Jan 21 13:51:28 compute-0 ceph-mon[75031]: 6.3 scrub ok
Jan 21 13:51:28 compute-0 sudo[119872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiyhpgofzcfdilchirhonezxuvusyrmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003488.5458817-255-144781872109306/AnsiballZ_stat.py'
Jan 21 13:51:28 compute-0 sudo[119872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:29 compute-0 python3.9[119874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:29 compute-0 sudo[119872]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:29 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 21 13:51:29 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 21 13:51:29 compute-0 sudo[119950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agmkufptfwpyhspeeykbzrzdezbcqekz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003488.5458817-255-144781872109306/AnsiballZ_file.py'
Jan 21 13:51:29 compute-0 sudo[119950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:29 compute-0 ceph-mon[75031]: pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:29 compute-0 python3.9[119952]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.05m43szh recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:29 compute-0 sudo[119950]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:30 compute-0 sudo[120102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrkqrjfmpnvodtzkgqxekjwiwewqldrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003489.7289045-267-115006552944776/AnsiballZ_stat.py'
Jan 21 13:51:30 compute-0 sudo[120102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:30 compute-0 python3.9[120104]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:30 compute-0 sudo[120102]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:30 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 21 13:51:30 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 21 13:51:30 compute-0 ceph-mon[75031]: 6.0 scrub starts
Jan 21 13:51:30 compute-0 ceph-mon[75031]: 6.0 scrub ok
Jan 21 13:51:30 compute-0 sudo[120180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fllwvelrjcgocyqdevpitruwudabxlan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003489.7289045-267-115006552944776/AnsiballZ_file.py'
Jan 21 13:51:30 compute-0 sudo[120180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:30 compute-0 python3.9[120182]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:30 compute-0 sudo[120180]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:31 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 21 13:51:31 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 21 13:51:31 compute-0 sudo[120332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuwxuenfactsfgbgzeoonisethruoywe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003490.9440467-280-13009265354875/AnsiballZ_command.py'
Jan 21 13:51:31 compute-0 sudo[120332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:31 compute-0 ceph-mon[75031]: pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:31 compute-0 ceph-mon[75031]: 9.11 scrub starts
Jan 21 13:51:31 compute-0 ceph-mon[75031]: 9.11 scrub ok
Jan 21 13:51:31 compute-0 python3.9[120334]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:51:31 compute-0 sudo[120332]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:32 compute-0 sudo[120485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxtagyxjlrxllahcysfdccorufxacizh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769003491.8461378-288-267494906443191/AnsiballZ_edpm_nftables_from_files.py'
Jan 21 13:51:32 compute-0 sudo[120485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:32 compute-0 ceph-mon[75031]: 9.5 scrub starts
Jan 21 13:51:32 compute-0 ceph-mon[75031]: 9.5 scrub ok
Jan 21 13:51:32 compute-0 python3[120487]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 13:51:32 compute-0 sudo[120485]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:33 compute-0 sudo[120637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swtjzmuminleqbyrguhlunnxdgoxggrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003492.7316258-296-236853376271170/AnsiballZ_stat.py'
Jan 21 13:51:33 compute-0 sudo[120637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:33 compute-0 python3.9[120639]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:33 compute-0 sudo[120637]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:33 compute-0 ceph-mon[75031]: pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:33 compute-0 sudo[120715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmifsartxzzcsvsakbbdkiqzvxomirku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003492.7316258-296-236853376271170/AnsiballZ_file.py'
Jan 21 13:51:33 compute-0 sudo[120715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:33 compute-0 python3.9[120717]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:33 compute-0 sudo[120715]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:34 compute-0 sudo[120867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfdzxpriqedhfbwfcotxqgsqivzqnyiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003493.8582091-308-149032794136228/AnsiballZ_stat.py'
Jan 21 13:51:34 compute-0 sudo[120867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:34 compute-0 python3.9[120869]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:34 compute-0 sudo[120867]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:34 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 21 13:51:34 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 21 13:51:34 compute-0 sudo[120993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuljugazfdnndwshmyzahidtuwblhkau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003493.8582091-308-149032794136228/AnsiballZ_copy.py'
Jan 21 13:51:34 compute-0 sudo[120993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:35 compute-0 python3.9[120995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003493.8582091-308-149032794136228/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:35 compute-0 sudo[120993]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:35 compute-0 ceph-mon[75031]: pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:35 compute-0 ceph-mon[75031]: 9.8 scrub starts
Jan 21 13:51:35 compute-0 ceph-mon[75031]: 9.8 scrub ok
Jan 21 13:51:35 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 21 13:51:35 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 21 13:51:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:35 compute-0 sudo[121145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtyxahagpefhevlkrscbrkuqrenrwmce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003495.346424-323-267757407940707/AnsiballZ_stat.py'
Jan 21 13:51:35 compute-0 sudo[121145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:35 compute-0 python3.9[121147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:35 compute-0 sudo[121145]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:36 compute-0 sudo[121223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyjdqgacebgxavuonzlrvddimizxarls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003495.346424-323-267757407940707/AnsiballZ_file.py'
Jan 21 13:51:36 compute-0 sudo[121223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:36 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 21 13:51:36 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 21 13:51:36 compute-0 python3.9[121225]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:36 compute-0 sudo[121223]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:36 compute-0 ceph-mon[75031]: 10.14 scrub starts
Jan 21 13:51:36 compute-0 ceph-mon[75031]: 10.14 scrub ok
Jan 21 13:51:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:36 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 21 13:51:36 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 21 13:51:36 compute-0 sudo[121375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwyqafnzxmiytuijmwuxahvolrvicedp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003496.5910587-335-136091717697554/AnsiballZ_stat.py'
Jan 21 13:51:36 compute-0 sudo[121375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:37 compute-0 python3.9[121377]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:37 compute-0 sudo[121375]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:37 compute-0 sudo[121453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pijjdizrvqzkjqpicjowhgwdxcnmziln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003496.5910587-335-136091717697554/AnsiballZ_file.py'
Jan 21 13:51:37 compute-0 sudo[121453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:37 compute-0 ceph-mon[75031]: pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:37 compute-0 ceph-mon[75031]: 9.16 scrub starts
Jan 21 13:51:37 compute-0 ceph-mon[75031]: 9.16 scrub ok
Jan 21 13:51:37 compute-0 ceph-mon[75031]: 9.e scrub starts
Jan 21 13:51:37 compute-0 ceph-mon[75031]: 9.e scrub ok
Jan 21 13:51:37 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 21 13:51:37 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 21 13:51:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:37 compute-0 python3.9[121455]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:37 compute-0 sudo[121453]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:38 compute-0 sudo[121605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cebsjwuyfdlfagygcuihcvbptaovybcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003497.8816652-347-186361521805117/AnsiballZ_stat.py'
Jan 21 13:51:38 compute-0 sudo[121605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:38 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 21 13:51:38 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 21 13:51:38 compute-0 python3.9[121607]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:38 compute-0 sudo[121605]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:38 compute-0 ceph-mon[75031]: 10.12 scrub starts
Jan 21 13:51:38 compute-0 ceph-mon[75031]: 10.12 scrub ok
Jan 21 13:51:38 compute-0 sudo[121683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbduhzwikdqdqzkfonobufcgmkwtzkxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003497.8816652-347-186361521805117/AnsiballZ_file.py'
Jan 21 13:51:38 compute-0 sudo[121683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:38 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 21 13:51:38 compute-0 python3.9[121685]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:38 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 21 13:51:38 compute-0 sudo[121683]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:39 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 21 13:51:39 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 21 13:51:39 compute-0 sudo[121835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rejagroguhimypidgqvsnldkylhxdskm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003499.093645-360-106872576774509/AnsiballZ_command.py'
Jan 21 13:51:39 compute-0 sudo[121835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:39 compute-0 ceph-mon[75031]: pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:39 compute-0 ceph-mon[75031]: 9.b scrub starts
Jan 21 13:51:39 compute-0 ceph-mon[75031]: 9.b scrub ok
Jan 21 13:51:39 compute-0 ceph-mon[75031]: 9.17 scrub starts
Jan 21 13:51:39 compute-0 ceph-mon[75031]: 9.17 scrub ok
Jan 21 13:51:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:51:39
Jan 21 13:51:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:51:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:51:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.log', '.rgw.root']
Jan 21 13:51:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:51:39 compute-0 python3.9[121837]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:51:39 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 21 13:51:39 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 21 13:51:39 compute-0 sudo[121835]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:40 compute-0 sudo[121990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhdyywrrdrvoulqwebdnkuxbshrstnke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003499.7829087-368-260180202652536/AnsiballZ_blockinfile.py'
Jan 21 13:51:40 compute-0 sudo[121990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:40 compute-0 python3.9[121992]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:40 compute-0 sudo[121990]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:40 compute-0 ceph-mon[75031]: 9.9 scrub starts
Jan 21 13:51:40 compute-0 ceph-mon[75031]: 9.9 scrub ok
Jan 21 13:51:40 compute-0 ceph-mon[75031]: 6.2 scrub starts
Jan 21 13:51:40 compute-0 ceph-mon[75031]: 6.2 scrub ok
Jan 21 13:51:40 compute-0 sudo[122142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnimhdnxjrmswzgcqsluradvxpfjskni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003500.6530464-377-41734880907231/AnsiballZ_file.py'
Jan 21 13:51:40 compute-0 sudo[122142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:51:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:51:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:51:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:51:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:51:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:51:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:51:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:51:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:51:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:51:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:51:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:51:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:51:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:51:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:51:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:51:41 compute-0 python3.9[122144]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:41 compute-0 sudo[122142]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:41 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 21 13:51:41 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 21 13:51:41 compute-0 ceph-mon[75031]: pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:41 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 21 13:51:41 compute-0 sudo[122294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jufcmerskfkfaskuihauawihfdccxtzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003501.3157313-377-3346148105993/AnsiballZ_file.py'
Jan 21 13:51:41 compute-0 sudo[122294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:41 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 21 13:51:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:41 compute-0 python3.9[122296]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:41 compute-0 sudo[122294]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:42 compute-0 ceph-mon[75031]: 9.d scrub starts
Jan 21 13:51:42 compute-0 ceph-mon[75031]: 9.d scrub ok
Jan 21 13:51:42 compute-0 ceph-mon[75031]: 6.6 scrub starts
Jan 21 13:51:42 compute-0 ceph-mon[75031]: 6.6 scrub ok
Jan 21 13:51:42 compute-0 sudo[122446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcnedjpajbfcwvhqdvsdnrutslolesmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003502.0116544-392-62281502901314/AnsiballZ_mount.py'
Jan 21 13:51:42 compute-0 sudo[122446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:42 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 21 13:51:42 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 21 13:51:42 compute-0 python3.9[122448]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 13:51:42 compute-0 sudo[122446]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:42 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 21 13:51:42 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 21 13:51:43 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 21 13:51:43 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 21 13:51:43 compute-0 ceph-mon[75031]: pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:43 compute-0 ceph-mon[75031]: 6.4 scrub starts
Jan 21 13:51:43 compute-0 ceph-mon[75031]: 6.4 scrub ok
Jan 21 13:51:43 compute-0 ceph-mon[75031]: 9.f scrub starts
Jan 21 13:51:43 compute-0 ceph-mon[75031]: 9.f scrub ok
Jan 21 13:51:43 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 21 13:51:43 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 21 13:51:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:43 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 21 13:51:43 compute-0 sudo[122598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhzlgcplsruuozysfnxiscyhfbuxckcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003503.598181-392-162086819534521/AnsiballZ_mount.py'
Jan 21 13:51:43 compute-0 sudo[122598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:43 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 21 13:51:44 compute-0 python3.9[122600]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 21 13:51:44 compute-0 sudo[122598]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:44 compute-0 ceph-mon[75031]: 9.1 scrub starts
Jan 21 13:51:44 compute-0 ceph-mon[75031]: 9.1 scrub ok
Jan 21 13:51:44 compute-0 ceph-mon[75031]: 6.d scrub starts
Jan 21 13:51:44 compute-0 ceph-mon[75031]: 6.d scrub ok
Jan 21 13:51:44 compute-0 ceph-mon[75031]: 9.c scrub starts
Jan 21 13:51:44 compute-0 ceph-mon[75031]: 9.c scrub ok
Jan 21 13:51:44 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 21 13:51:44 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 21 13:51:44 compute-0 sshd-session[115386]: Connection closed by 192.168.122.30 port 40694
Jan 21 13:51:44 compute-0 sshd-session[115364]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:51:44 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 21 13:51:44 compute-0 systemd[1]: session-40.scope: Consumed 34.377s CPU time.
Jan 21 13:51:44 compute-0 systemd-logind[780]: Session 40 logged out. Waiting for processes to exit.
Jan 21 13:51:44 compute-0 systemd-logind[780]: Removed session 40.
Jan 21 13:51:45 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 21 13:51:45 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 21 13:51:45 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 21 13:51:45 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 21 13:51:45 compute-0 ceph-mon[75031]: pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:45 compute-0 ceph-mon[75031]: 6.e scrub starts
Jan 21 13:51:45 compute-0 ceph-mon[75031]: 6.e scrub ok
Jan 21 13:51:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:46 compute-0 ceph-mon[75031]: 9.3 scrub starts
Jan 21 13:51:46 compute-0 ceph-mon[75031]: 9.3 scrub ok
Jan 21 13:51:46 compute-0 ceph-mon[75031]: 6.1 scrub starts
Jan 21 13:51:46 compute-0 ceph-mon[75031]: 6.1 scrub ok
Jan 21 13:51:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:46 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 21 13:51:46 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 21 13:51:47 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 21 13:51:47 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 21 13:51:47 compute-0 ceph-mon[75031]: pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:47 compute-0 ceph-mon[75031]: 9.7 scrub starts
Jan 21 13:51:47 compute-0 ceph-mon[75031]: 9.7 scrub ok
Jan 21 13:51:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:47 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 21 13:51:47 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 21 13:51:48 compute-0 ceph-mon[75031]: 9.1d scrub starts
Jan 21 13:51:48 compute-0 ceph-mon[75031]: 9.1d scrub ok
Jan 21 13:51:48 compute-0 ceph-mon[75031]: 9.6 scrub starts
Jan 21 13:51:48 compute-0 ceph-mon[75031]: 9.6 scrub ok
Jan 21 13:51:49 compute-0 ceph-mon[75031]: pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:50 compute-0 sshd-session[122625]: Accepted publickey for zuul from 192.168.122.30 port 47416 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:51:50 compute-0 systemd-logind[780]: New session 41 of user zuul.
Jan 21 13:51:50 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 21 13:51:50 compute-0 sshd-session[122625]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:51:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 13:51:50 compute-0 ceph-mon[75031]: pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:50 compute-0 sudo[122778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvqblyoirkbozuvrjtrlubrrcfcdbzch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003510.3066413-16-65743654493556/AnsiballZ_tempfile.py'
Jan 21 13:51:50 compute-0 sudo[122778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:50 compute-0 python3.9[122780]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 21 13:51:51 compute-0 sudo[122778]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:51 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 21 13:51:51 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 21 13:51:51 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 21 13:51:51 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 21 13:51:51 compute-0 sudo[122930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edtqalvfenbnzbknnxzvsqupgivogvqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003511.175589-28-230668226355728/AnsiballZ_stat.py'
Jan 21 13:51:51 compute-0 sudo[122930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:51 compute-0 ceph-mon[75031]: 6.c scrub starts
Jan 21 13:51:51 compute-0 ceph-mon[75031]: 6.c scrub ok
Jan 21 13:51:51 compute-0 python3.9[122932]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:51:51 compute-0 sudo[122930]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:51 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 21 13:51:51 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 21 13:51:52 compute-0 sudo[123084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzvlnpgsqiregadebmwbkknrvrzbftzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003511.9828954-36-194585626852722/AnsiballZ_slurp.py'
Jan 21 13:51:52 compute-0 sudo[123084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:52 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 21 13:51:52 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 21 13:51:52 compute-0 python3.9[123086]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 21 13:51:52 compute-0 sudo[123084]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:52 compute-0 ceph-mon[75031]: 9.1c scrub starts
Jan 21 13:51:52 compute-0 ceph-mon[75031]: 9.1c scrub ok
Jan 21 13:51:52 compute-0 ceph-mon[75031]: pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:52 compute-0 ceph-mon[75031]: 9.19 scrub starts
Jan 21 13:51:52 compute-0 ceph-mon[75031]: 9.19 scrub ok
Jan 21 13:51:52 compute-0 ceph-mon[75031]: 6.b scrub starts
Jan 21 13:51:52 compute-0 ceph-mon[75031]: 6.b scrub ok
Jan 21 13:51:53 compute-0 sudo[123236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqcqkofgagiqotkcwyvljrwahpwxyndc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003512.8335037-44-222600172076668/AnsiballZ_stat.py'
Jan 21 13:51:53 compute-0 sudo[123236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:53 compute-0 python3.9[123238]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.q8fk82h7 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:51:53 compute-0 sudo[123236]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:53 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 21 13:51:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:53 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 21 13:51:53 compute-0 ceph-mon[75031]: 9.15 scrub starts
Jan 21 13:51:53 compute-0 ceph-mon[75031]: 9.15 scrub ok
Jan 21 13:51:53 compute-0 sudo[123362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qydpdvhgkvvdzimmlxfauzqxnhndmiwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003512.8335037-44-222600172076668/AnsiballZ_copy.py'
Jan 21 13:51:53 compute-0 sudo[123362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:54 compute-0 python3.9[123364]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.q8fk82h7 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003512.8335037-44-222600172076668/.source.q8fk82h7 _original_basename=.ux_nvr7p follow=False checksum=0f4b47c126c5fd5568004f40e6dcd1e127845364 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:54 compute-0 sudo[123362]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:54 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 21 13:51:54 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 21 13:51:54 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 21 13:51:54 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 21 13:51:54 compute-0 ceph-mon[75031]: pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:55 compute-0 sudo[123514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqumguxsvzqtepjzinlyodhsedbquiwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003514.3144422-59-81530242764655/AnsiballZ_setup.py'
Jan 21 13:51:55 compute-0 sudo[123514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:55 compute-0 python3.9[123516]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:51:55 compute-0 sudo[123514]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:55 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 21 13:51:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:55 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 21 13:51:55 compute-0 ceph-mon[75031]: 9.1b scrub starts
Jan 21 13:51:55 compute-0 ceph-mon[75031]: 9.1b scrub ok
Jan 21 13:51:55 compute-0 ceph-mon[75031]: 9.18 scrub starts
Jan 21 13:51:55 compute-0 ceph-mon[75031]: 9.18 scrub ok
Jan 21 13:51:55 compute-0 ceph-mon[75031]: 9.10 scrub starts
Jan 21 13:51:55 compute-0 ceph-mon[75031]: 9.10 scrub ok
Jan 21 13:51:56 compute-0 sudo[123666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrtyvawlrzlhjpcfwrlkqkkisqmpwiin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003515.588759-68-155031268212213/AnsiballZ_blockinfile.py'
Jan 21 13:51:56 compute-0 sudo[123666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:56 compute-0 python3.9[123668]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeFBF9sLBUut0jERuw8eMRSTmHQPq77CYOZnLVmOaBCBCSPbeUxgTSDGAypqgANDFspz2HthTRfZ/0obiaSrheRKp8JI8vmjOkZpbGmM9pA3z2/L+A3dJtYryJ7HhNyc/RGv6tDqg7CqaPNO1VlKkJaCblvoGA/sTsuLgg72/kyPlgz+xxZIIXUolJRTelowGJeLl4FZhJevZEH/0RgRZW5SIe7QgvHYRWR/yATnINpKKPRydWLgea+k//th3RGx9GuUGWuDCPeJvxRKrqAMI8uxmSm/8+i6EK0vVqkOdcdQRVsHY2r6DJ55kbxKE6zwdr/2TWUC4j2L+d8AvLLtPL6yx6yOUDHD9KicyxruiQYYwkskMnkAWJeSL1egxNDFgJCw7P56bEGIyFhPIAzxR1E0ZuAQqv/W1KYFqspYxqjsccWFRon0TW3DyHzXSXRZkvgVBAyZPlZBTcsw58X536t/6unFkYBPfaCNmQIGhaOZ0dFgK7Bl1Jj1cThi6d/bE=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINb+axAz9AQLLF8DlI2l4unh/lYce78aEpf6RASalCvh
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHJ6/CEvuTJeUBrk8Nw85tSdtMYRRRBEbjPN601M+Wvbkfd6a4tr5R6VV6/ot3jZ0PwT+0BaXWVuiTlpRpxsLDo=
                                              create=True mode=0644 path=/tmp/ansible.q8fk82h7 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:56 compute-0 sudo[123666]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:56 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 21 13:51:56 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 21 13:51:56 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 21 13:51:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:51:56 compute-0 ceph-mon[75031]: pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:56 compute-0 ceph-mon[75031]: 9.12 scrub starts
Jan 21 13:51:56 compute-0 ceph-mon[75031]: 9.12 scrub ok
Jan 21 13:51:56 compute-0 sudo[123820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhrgdiwgdasusjsebkzevmhiksifiuhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003516.4252827-76-122160764837055/AnsiballZ_command.py'
Jan 21 13:51:56 compute-0 sudo[123820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:57 compute-0 python3.9[123822]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.q8fk82h7' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:51:57 compute-0 sudo[123820]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:57 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 21 13:51:57 compute-0 ceph-osd[85740]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 21 13:51:57 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 21 13:51:57 compute-0 sudo[123974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcnpymyviudgeaftdprsstmsmdluscuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003517.245408-84-64419706144635/AnsiballZ_file.py'
Jan 21 13:51:57 compute-0 sudo[123974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:51:57 compute-0 ceph-osd[87843]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 21 13:51:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:57 compute-0 python3.9[123976]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.q8fk82h7 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:51:57 compute-0 sudo[123974]: pam_unix(sudo:session): session closed for user root
Jan 21 13:51:58 compute-0 ceph-mon[75031]: 9.1e scrub starts
Jan 21 13:51:58 compute-0 ceph-mon[75031]: 9.1e scrub ok
Jan 21 13:51:58 compute-0 sshd-session[122628]: Connection closed by 192.168.122.30 port 47416
Jan 21 13:51:58 compute-0 sshd-session[122625]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:51:58 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 21 13:51:58 compute-0 systemd[1]: session-41.scope: Consumed 5.457s CPU time.
Jan 21 13:51:58 compute-0 systemd-logind[780]: Session 41 logged out. Waiting for processes to exit.
Jan 21 13:51:58 compute-0 systemd-logind[780]: Removed session 41.
Jan 21 13:51:58 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 21 13:51:58 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 21 13:51:59 compute-0 ceph-mon[75031]: 9.13 scrub starts
Jan 21 13:51:59 compute-0 ceph-mon[75031]: 9.13 scrub ok
Jan 21 13:51:59 compute-0 ceph-mon[75031]: pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:51:59 compute-0 ceph-mon[75031]: 9.14 scrub starts
Jan 21 13:51:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:00 compute-0 sudo[124001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:52:00 compute-0 sudo[124001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:52:00 compute-0 sudo[124001]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:00 compute-0 sudo[124026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:52:00 compute-0 sudo[124026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:52:00 compute-0 ceph-mon[75031]: 9.14 scrub ok
Jan 21 13:52:00 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 21 13:52:00 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 21 13:52:00 compute-0 sudo[124026]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:52:00 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:52:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:52:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:52:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:52:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:52:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:52:00 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:52:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:52:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:52:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:52:00 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:52:01 compute-0 sudo[124082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:52:01 compute-0 sudo[124082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:52:01 compute-0 sudo[124082]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:01 compute-0 sudo[124107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:52:01 compute-0 sudo[124107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:52:01 compute-0 ceph-mon[75031]: pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:01 compute-0 ceph-mon[75031]: 9.2 scrub starts
Jan 21 13:52:01 compute-0 ceph-mon[75031]: 9.2 scrub ok
Jan 21 13:52:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:52:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:52:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:52:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:52:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:52:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:52:01 compute-0 podman[124145]: 2026-01-21 13:52:01.38228561 +0000 UTC m=+0.063424220 container create a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:52:01 compute-0 podman[124145]: 2026-01-21 13:52:01.340401193 +0000 UTC m=+0.021539813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:52:01 compute-0 systemd[1]: Started libpod-conmon-a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037.scope.
Jan 21 13:52:01 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:52:01 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 21 13:52:01 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 21 13:52:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:01 compute-0 podman[124145]: 2026-01-21 13:52:01.854264688 +0000 UTC m=+0.535403358 container init a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:52:01 compute-0 podman[124145]: 2026-01-21 13:52:01.862719023 +0000 UTC m=+0.543857623 container start a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 13:52:01 compute-0 podman[124145]: 2026-01-21 13:52:01.868860123 +0000 UTC m=+0.549998803 container attach a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 13:52:01 compute-0 naughty_grothendieck[124161]: 167 167
Jan 21 13:52:01 compute-0 systemd[1]: libpod-a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037.scope: Deactivated successfully.
Jan 21 13:52:01 compute-0 podman[124145]: 2026-01-21 13:52:01.873405223 +0000 UTC m=+0.554543823 container died a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 13:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-edbb85369616af04b1120987fc5d5eef020e955b067e230138d92bf6a53f7fc2-merged.mount: Deactivated successfully.
Jan 21 13:52:02 compute-0 podman[124145]: 2026-01-21 13:52:02.059668055 +0000 UTC m=+0.740806665 container remove a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_grothendieck, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:52:02 compute-0 systemd[1]: libpod-conmon-a7b31c7e5570090ab4d642e93603b8b2efc308474240fdc2e97e635e91537037.scope: Deactivated successfully.
Jan 21 13:52:02 compute-0 podman[124184]: 2026-01-21 13:52:02.299267561 +0000 UTC m=+0.086531452 container create 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 13:52:02 compute-0 ceph-mon[75031]: 9.0 scrub starts
Jan 21 13:52:02 compute-0 ceph-mon[75031]: 9.0 scrub ok
Jan 21 13:52:02 compute-0 podman[124184]: 2026-01-21 13:52:02.24564626 +0000 UTC m=+0.032910161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:52:02 compute-0 systemd[1]: Started libpod-conmon-379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8.scope.
Jan 21 13:52:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:02 compute-0 podman[124184]: 2026-01-21 13:52:02.436153804 +0000 UTC m=+0.223417715 container init 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 13:52:02 compute-0 podman[124184]: 2026-01-21 13:52:02.447184572 +0000 UTC m=+0.234448443 container start 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:52:02 compute-0 podman[124184]: 2026-01-21 13:52:02.451489607 +0000 UTC m=+0.238753558 container attach 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:52:03 compute-0 suspicious_keller[124201]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:52:03 compute-0 suspicious_keller[124201]: --> All data devices are unavailable
Jan 21 13:52:03 compute-0 systemd[1]: libpod-379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8.scope: Deactivated successfully.
Jan 21 13:52:03 compute-0 podman[124184]: 2026-01-21 13:52:03.02377431 +0000 UTC m=+0.811038201 container died 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 21 13:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-e88d9b03768cdfb76317dffb647fb0c23d2f74713d7957e0b3725a958f29af2f-merged.mount: Deactivated successfully.
Jan 21 13:52:03 compute-0 podman[124184]: 2026-01-21 13:52:03.077439892 +0000 UTC m=+0.864703743 container remove 379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 13:52:03 compute-0 systemd[1]: libpod-conmon-379400495ee6bac705cd02f3341b97f6c8387a1dd403d6c6d80e21f5d0204ee8.scope: Deactivated successfully.
Jan 21 13:52:03 compute-0 sudo[124107]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:03 compute-0 sudo[124236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:52:03 compute-0 sudo[124236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:52:03 compute-0 sudo[124236]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:03 compute-0 sudo[124261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:52:03 compute-0 sudo[124261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:52:03 compute-0 ceph-mon[75031]: pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:03 compute-0 sshd-session[124286]: Accepted publickey for zuul from 192.168.122.30 port 56642 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:52:03 compute-0 systemd-logind[780]: New session 42 of user zuul.
Jan 21 13:52:03 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 21 13:52:03 compute-0 sshd-session[124286]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:52:03 compute-0 podman[124308]: 2026-01-21 13:52:03.550121148 +0000 UTC m=+0.052583698 container create 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:52:03 compute-0 systemd[1]: Started libpod-conmon-4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9.scope.
Jan 21 13:52:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:52:03 compute-0 podman[124308]: 2026-01-21 13:52:03.626220375 +0000 UTC m=+0.128682945 container init 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:52:03 compute-0 podman[124308]: 2026-01-21 13:52:03.533577516 +0000 UTC m=+0.036040086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:52:03 compute-0 podman[124308]: 2026-01-21 13:52:03.63223483 +0000 UTC m=+0.134697380 container start 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:52:03 compute-0 podman[124308]: 2026-01-21 13:52:03.635671974 +0000 UTC m=+0.138134554 container attach 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:52:03 compute-0 festive_booth[124368]: 167 167
Jan 21 13:52:03 compute-0 systemd[1]: libpod-4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9.scope: Deactivated successfully.
Jan 21 13:52:03 compute-0 podman[124308]: 2026-01-21 13:52:03.637636712 +0000 UTC m=+0.140099262 container died 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e37fcf2465918fd5a8cacbfa30ee02d7f334befa3114ecca5fbe27fa6e19a4d-merged.mount: Deactivated successfully.
Jan 21 13:52:03 compute-0 podman[124308]: 2026-01-21 13:52:03.676565127 +0000 UTC m=+0.179027677 container remove 4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 13:52:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:03 compute-0 systemd[1]: libpod-conmon-4efafd7847ae13d8d6255ab1af1be44882d9b4354f0d75b7b90da2995ddd18b9.scope: Deactivated successfully.
Jan 21 13:52:03 compute-0 podman[124391]: 2026-01-21 13:52:03.845322813 +0000 UTC m=+0.048317303 container create 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 13:52:03 compute-0 systemd[1]: Started libpod-conmon-78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19.scope.
Jan 21 13:52:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:52:03 compute-0 podman[124391]: 2026-01-21 13:52:03.824440676 +0000 UTC m=+0.027435176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e6071dc2800dc09a1d8faf6456e4346ed05e912e6fa0d4bedc3d6a615e0dd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e6071dc2800dc09a1d8faf6456e4346ed05e912e6fa0d4bedc3d6a615e0dd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e6071dc2800dc09a1d8faf6456e4346ed05e912e6fa0d4bedc3d6a615e0dd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e6071dc2800dc09a1d8faf6456e4346ed05e912e6fa0d4bedc3d6a615e0dd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:03 compute-0 podman[124391]: 2026-01-21 13:52:03.936537898 +0000 UTC m=+0.139532398 container init 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 13:52:03 compute-0 podman[124391]: 2026-01-21 13:52:03.950105707 +0000 UTC m=+0.153100197 container start 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 13:52:03 compute-0 podman[124391]: 2026-01-21 13:52:03.953969561 +0000 UTC m=+0.156964061 container attach 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 13:52:04 compute-0 naughty_hopper[124408]: {
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:     "0": [
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:         {
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "devices": [
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "/dev/loop3"
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             ],
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_name": "ceph_lv0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_size": "21470642176",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "name": "ceph_lv0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "tags": {
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.cluster_name": "ceph",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.crush_device_class": "",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.encrypted": "0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.objectstore": "bluestore",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.osd_id": "0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.type": "block",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.vdo": "0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.with_tpm": "0"
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             },
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "type": "block",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "vg_name": "ceph_vg0"
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:         }
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:     ],
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:     "1": [
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:         {
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "devices": [
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "/dev/loop4"
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             ],
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_name": "ceph_lv1",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_size": "21470642176",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "name": "ceph_lv1",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "tags": {
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.cluster_name": "ceph",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.crush_device_class": "",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.encrypted": "0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.objectstore": "bluestore",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.osd_id": "1",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.type": "block",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.vdo": "0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.with_tpm": "0"
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             },
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "type": "block",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "vg_name": "ceph_vg1"
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:         }
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:     ],
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:     "2": [
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:         {
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "devices": [
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "/dev/loop5"
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             ],
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_name": "ceph_lv2",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_size": "21470642176",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "name": "ceph_lv2",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "tags": {
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.cluster_name": "ceph",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.crush_device_class": "",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.encrypted": "0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.objectstore": "bluestore",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.osd_id": "2",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.type": "block",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.vdo": "0",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:                 "ceph.with_tpm": "0"
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             },
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "type": "block",
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:             "vg_name": "ceph_vg2"
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:         }
Jan 21 13:52:04 compute-0 naughty_hopper[124408]:     ]
Jan 21 13:52:04 compute-0 naughty_hopper[124408]: }
Jan 21 13:52:04 compute-0 systemd[1]: libpod-78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19.scope: Deactivated successfully.
Jan 21 13:52:04 compute-0 podman[124513]: 2026-01-21 13:52:04.286838842 +0000 UTC m=+0.021878442 container died 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2e6071dc2800dc09a1d8faf6456e4346ed05e912e6fa0d4bedc3d6a615e0dd2-merged.mount: Deactivated successfully.
Jan 21 13:52:04 compute-0 podman[124513]: 2026-01-21 13:52:04.332322896 +0000 UTC m=+0.067362466 container remove 78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hopper, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:52:04 compute-0 systemd[1]: libpod-conmon-78570b2a9ccd07c3e74c14655e78999eb733a1412489ee7235f5c85eda72dc19.scope: Deactivated successfully.
Jan 21 13:52:04 compute-0 sudo[124261]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:04 compute-0 sudo[124528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:52:04 compute-0 sudo[124528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:52:04 compute-0 sudo[124528]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:04 compute-0 python3.9[124510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:52:04 compute-0 sudo[124553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:52:04 compute-0 sudo[124553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:52:04 compute-0 podman[124617]: 2026-01-21 13:52:04.761083594 +0000 UTC m=+0.043324233 container create 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 13:52:04 compute-0 systemd[1]: Started libpod-conmon-858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067.scope.
Jan 21 13:52:04 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:52:04 compute-0 podman[124617]: 2026-01-21 13:52:04.832838397 +0000 UTC m=+0.115079056 container init 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:52:04 compute-0 podman[124617]: 2026-01-21 13:52:04.740817023 +0000 UTC m=+0.023057712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:52:04 compute-0 podman[124617]: 2026-01-21 13:52:04.839305453 +0000 UTC m=+0.121546092 container start 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:52:04 compute-0 podman[124617]: 2026-01-21 13:52:04.842655465 +0000 UTC m=+0.124896114 container attach 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 21 13:52:04 compute-0 affectionate_snyder[124656]: 167 167
Jan 21 13:52:04 compute-0 systemd[1]: libpod-858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067.scope: Deactivated successfully.
Jan 21 13:52:04 compute-0 podman[124617]: 2026-01-21 13:52:04.846081048 +0000 UTC m=+0.128321727 container died 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 13:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-fba9a8a4c3e62d7b920d738a964ca9de20bda0f14fc00b8c22e0aa1e49284d35-merged.mount: Deactivated successfully.
Jan 21 13:52:04 compute-0 podman[124617]: 2026-01-21 13:52:04.881528438 +0000 UTC m=+0.163769077 container remove 858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:52:04 compute-0 systemd[1]: libpod-conmon-858f5fc35f739633c608cad226862efeccbe352a6791e5c7bef74aec5fadf067.scope: Deactivated successfully.
Jan 21 13:52:05 compute-0 podman[124709]: 2026-01-21 13:52:05.076887211 +0000 UTC m=+0.061577786 container create a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 13:52:05 compute-0 systemd[1]: Started libpod-conmon-a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af.scope.
Jan 21 13:52:05 compute-0 podman[124709]: 2026-01-21 13:52:05.046075323 +0000 UTC m=+0.030765958 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:52:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52be84a38543593e60d9832956efd4f698865009d4ede44cea4a661c01fb94e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52be84a38543593e60d9832956efd4f698865009d4ede44cea4a661c01fb94e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52be84a38543593e60d9832956efd4f698865009d4ede44cea4a661c01fb94e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52be84a38543593e60d9832956efd4f698865009d4ede44cea4a661c01fb94e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:52:05 compute-0 podman[124709]: 2026-01-21 13:52:05.168754352 +0000 UTC m=+0.153444887 container init a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 13:52:05 compute-0 podman[124709]: 2026-01-21 13:52:05.175018763 +0000 UTC m=+0.159709338 container start a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 13:52:05 compute-0 podman[124709]: 2026-01-21 13:52:05.179273407 +0000 UTC m=+0.163963942 container attach a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 13:52:05 compute-0 ceph-mon[75031]: pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:05 compute-0 sudo[124814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwbcuvnbfouahesuiknpuvlkgzxuyzpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003524.792394-27-25153631109/AnsiballZ_systemd.py'
Jan 21 13:52:05 compute-0 sudo[124814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:05 compute-0 python3.9[124816]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 13:52:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:05 compute-0 sudo[124814]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:05 compute-0 lvm[124911]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:52:05 compute-0 lvm[124908]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:52:05 compute-0 lvm[124908]: VG ceph_vg0 finished
Jan 21 13:52:05 compute-0 lvm[124911]: VG ceph_vg1 finished
Jan 21 13:52:05 compute-0 lvm[124921]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:52:05 compute-0 lvm[124921]: VG ceph_vg2 finished
Jan 21 13:52:05 compute-0 bold_villani[124726]: {}
Jan 21 13:52:05 compute-0 systemd[1]: libpod-a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af.scope: Deactivated successfully.
Jan 21 13:52:05 compute-0 systemd[1]: libpod-a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af.scope: Consumed 1.312s CPU time.
Jan 21 13:52:05 compute-0 podman[124709]: 2026-01-21 13:52:05.997818588 +0000 UTC m=+0.982509123 container died a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 13:52:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-52be84a38543593e60d9832956efd4f698865009d4ede44cea4a661c01fb94e4-merged.mount: Deactivated successfully.
Jan 21 13:52:06 compute-0 podman[124709]: 2026-01-21 13:52:06.056539824 +0000 UTC m=+1.041230389 container remove a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 13:52:06 compute-0 systemd[1]: libpod-conmon-a84a5fe3cf1ca0544c6db400abb7094c8c308f3462a5491708c968ebba87b4af.scope: Deactivated successfully.
Jan 21 13:52:06 compute-0 sudo[124553]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:52:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:52:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:52:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:52:06 compute-0 sudo[125022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:52:06 compute-0 sudo[125022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:52:06 compute-0 sudo[125022]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:06 compute-0 sudo[125072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dolkqvdnbdynndgcipyyfsgavsnkrdog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003525.8840764-35-78914649003839/AnsiballZ_systemd.py'
Jan 21 13:52:06 compute-0 sudo[125072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:06 compute-0 python3.9[125075]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:52:06 compute-0 sudo[125072]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:07 compute-0 ceph-mon[75031]: pgmap v336: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:52:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:52:07 compute-0 sudo[125226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jutrwfunnlveahmrhkafhoawmiznxvxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003526.7137437-44-116853439358145/AnsiballZ_command.py'
Jan 21 13:52:07 compute-0 sudo[125226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:07 compute-0 python3.9[125228]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:52:07 compute-0 sudo[125226]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:07 compute-0 sudo[125379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqpgszgfqijggkfrkdqkqdfpqajdxllk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003527.5391712-52-2614473852904/AnsiballZ_stat.py'
Jan 21 13:52:07 compute-0 sudo[125379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:08 compute-0 python3.9[125381]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:52:08 compute-0 sudo[125379]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:08 compute-0 sudo[125531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qujbsksdkolgvzpxuthdfvoeyjxxqewn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003528.3809016-61-107788860219898/AnsiballZ_file.py'
Jan 21 13:52:08 compute-0 sudo[125531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:09 compute-0 python3.9[125533]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:09 compute-0 sudo[125531]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:09 compute-0 sshd-session[124289]: Connection closed by 192.168.122.30 port 56642
Jan 21 13:52:09 compute-0 sshd-session[124286]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:52:09 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 21 13:52:09 compute-0 systemd[1]: session-42.scope: Consumed 3.909s CPU time.
Jan 21 13:52:09 compute-0 systemd-logind[780]: Session 42 logged out. Waiting for processes to exit.
Jan 21 13:52:09 compute-0 systemd-logind[780]: Removed session 42.
Jan 21 13:52:09 compute-0 ceph-mon[75031]: pgmap v337: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:10 compute-0 ceph-mon[75031]: pgmap v338: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:52:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:52:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:52:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:52:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:52:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:52:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:12 compute-0 ceph-mon[75031]: pgmap v339: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:14 compute-0 sshd-session[125558]: Accepted publickey for zuul from 192.168.122.30 port 57952 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:52:14 compute-0 systemd-logind[780]: New session 43 of user zuul.
Jan 21 13:52:14 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 21 13:52:14 compute-0 sshd-session[125558]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:52:14 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 21 13:52:14 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 21 13:52:15 compute-0 ceph-mon[75031]: pgmap v340: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:15 compute-0 ceph-mon[75031]: 9.a scrub starts
Jan 21 13:52:15 compute-0 ceph-mon[75031]: 9.a scrub ok
Jan 21 13:52:15 compute-0 python3.9[125711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:52:15 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 21 13:52:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:15 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 21 13:52:16 compute-0 sudo[125865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svjlhtehwfyrfifqesevrualpdmtwamc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003536.0474832-29-196724530632064/AnsiballZ_setup.py'
Jan 21 13:52:16 compute-0 sudo[125865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:16 compute-0 ceph-mon[75031]: 9.4 scrub starts
Jan 21 13:52:16 compute-0 ceph-mon[75031]: 9.4 scrub ok
Jan 21 13:52:16 compute-0 python3.9[125867]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:52:16 compute-0 sudo[125865]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:17 compute-0 sudo[125949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohekenodicwsfhjjkdsgwmsjrkerdfjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003536.0474832-29-196724530632064/AnsiballZ_dnf.py'
Jan 21 13:52:17 compute-0 sudo[125949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:17 compute-0 python3.9[125951]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 21 13:52:17 compute-0 ceph-mon[75031]: pgmap v341: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:18 compute-0 sshd-session[71202]: Received disconnect from 38.102.83.129 port 39144:11: disconnected by user
Jan 21 13:52:18 compute-0 sshd-session[71202]: Disconnected from user zuul 38.102.83.129 port 39144
Jan 21 13:52:18 compute-0 sshd-session[71199]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:52:18 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 21 13:52:18 compute-0 systemd[1]: session-18.scope: Consumed 1min 37.243s CPU time.
Jan 21 13:52:18 compute-0 systemd-logind[780]: Session 18 logged out. Waiting for processes to exit.
Jan 21 13:52:18 compute-0 systemd-logind[780]: Removed session 18.
Jan 21 13:52:18 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 21 13:52:18 compute-0 ceph-mon[75031]: pgmap v342: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:18 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 21 13:52:18 compute-0 sudo[125949]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:19 compute-0 python3.9[126102]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:52:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:19 compute-0 ceph-mon[75031]: 9.1a scrub starts
Jan 21 13:52:19 compute-0 ceph-mon[75031]: 9.1a scrub ok
Jan 21 13:52:20 compute-0 ceph-mon[75031]: pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:20 compute-0 python3.9[126253]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 13:52:21 compute-0 python3.9[126403]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:52:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:22 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 21 13:52:22 compute-0 ceph-osd[86795]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 21 13:52:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:23 compute-0 python3.9[126553]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:52:23 compute-0 sshd-session[125561]: Connection closed by 192.168.122.30 port 57952
Jan 21 13:52:23 compute-0 sshd-session[125558]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:52:23 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 21 13:52:23 compute-0 systemd[1]: session-43.scope: Consumed 6.077s CPU time.
Jan 21 13:52:23 compute-0 systemd-logind[780]: Session 43 logged out. Waiting for processes to exit.
Jan 21 13:52:23 compute-0 systemd-logind[780]: Removed session 43.
Jan 21 13:52:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:23 compute-0 ceph-mon[75031]: pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:23 compute-0 ceph-mon[75031]: 9.1f scrub starts
Jan 21 13:52:23 compute-0 ceph-mon[75031]: 9.1f scrub ok
Jan 21 13:52:25 compute-0 ceph-mon[75031]: pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:26 compute-0 ceph-mon[75031]: pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:28 compute-0 sshd-session[126578]: Accepted publickey for zuul from 192.168.122.30 port 37272 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:52:28 compute-0 systemd-logind[780]: New session 44 of user zuul.
Jan 21 13:52:28 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 21 13:52:28 compute-0 sshd-session[126578]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:52:29 compute-0 ceph-mon[75031]: pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:29 compute-0 python3.9[126731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:52:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:31 compute-0 sudo[126885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmdtpbqpnwemsbrtbgvywjlduxqbeflu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003550.7164907-45-151617832143576/AnsiballZ_file.py'
Jan 21 13:52:31 compute-0 sudo[126885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:31 compute-0 python3.9[126887]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:31 compute-0 sudo[126885]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:31 compute-0 ceph-mon[75031]: pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:31 compute-0 sudo[127037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mccrdojwydjglfciweevpzgubeltrtiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003551.4872465-45-246370011731235/AnsiballZ_file.py'
Jan 21 13:52:31 compute-0 sudo[127037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:31 compute-0 python3.9[127039]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:31 compute-0 sudo[127037]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:32 compute-0 sudo[127189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmohdpsmphguauddapjfxjemrmpgfpcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003552.181891-60-216928368491860/AnsiballZ_stat.py'
Jan 21 13:52:32 compute-0 sudo[127189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:32 compute-0 ceph-mon[75031]: pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:32 compute-0 python3.9[127191]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:32 compute-0 sudo[127189]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:33 compute-0 sudo[127312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grcxfpbqrirsgyhspbbduwdrwiinjbhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003552.181891-60-216928368491860/AnsiballZ_copy.py'
Jan 21 13:52:33 compute-0 sudo[127312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:33 compute-0 python3.9[127314]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003552.181891-60-216928368491860/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f0cd73048907ee5aa263a36f175e7caed7c19b62 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:33 compute-0 sudo[127312]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:34 compute-0 sudo[127464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ashmuuxenjbngsfpmirlezhpewasdxrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003553.7422426-60-114502039322093/AnsiballZ_stat.py'
Jan 21 13:52:34 compute-0 sudo[127464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:34 compute-0 python3.9[127466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:34 compute-0 sudo[127464]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:34 compute-0 sudo[127587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-salyrbnbdgnbfjpqjfhanbukfmfhrdvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003553.7422426-60-114502039322093/AnsiballZ_copy.py'
Jan 21 13:52:34 compute-0 sudo[127587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:34 compute-0 python3.9[127589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003553.7422426-60-114502039322093/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=621aaf2369cfa3ba36d9b9152ccf63ed8c29f884 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:34 compute-0 sudo[127587]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:35 compute-0 ceph-mon[75031]: pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:35 compute-0 sudo[127739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnmvjvqceewzurcrzhopnqoobpgmhknl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003555.063375-60-76524087930483/AnsiballZ_stat.py'
Jan 21 13:52:35 compute-0 sudo[127739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:35 compute-0 python3.9[127741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:35 compute-0 sudo[127739]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:35 compute-0 sudo[127862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfkxtnnepfrmxcagwlbdzbmnvcutgjdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003555.063375-60-76524087930483/AnsiballZ_copy.py'
Jan 21 13:52:35 compute-0 sudo[127862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:36 compute-0 python3.9[127864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003555.063375-60-76524087930483/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4a0995998f2033012989bc7ba668121e800b17e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:36 compute-0 sudo[127862]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:36 compute-0 sudo[128014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiztnpyarjziamuncxmssloynoqmbxlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003556.3599234-104-124718835370386/AnsiballZ_file.py'
Jan 21 13:52:36 compute-0 sudo[128014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:36 compute-0 python3.9[128016]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:36 compute-0 sudo[128014]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:37 compute-0 sudo[128167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spuxwjhifowlzjpxrefjxzlrwdheovqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003557.056349-104-215404279858594/AnsiballZ_file.py'
Jan 21 13:52:37 compute-0 sudo[128167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:37 compute-0 ceph-mon[75031]: pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:37 compute-0 python3.9[128169]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:37 compute-0 sudo[128167]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:38 compute-0 sudo[128319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbvcacoukfcytpxgkyvtjzvcseribjsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003557.7472303-119-167956625773194/AnsiballZ_stat.py'
Jan 21 13:52:38 compute-0 sudo[128319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:38 compute-0 python3.9[128321]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:38 compute-0 sudo[128319]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:38 compute-0 sudo[128442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxmlejmvvocyerdsduftsxrwgrutiolt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003557.7472303-119-167956625773194/AnsiballZ_copy.py'
Jan 21 13:52:38 compute-0 sudo[128442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:38 compute-0 ceph-mon[75031]: pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:38 compute-0 python3.9[128444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003557.7472303-119-167956625773194/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=87373f7552cc80bec6e873c7b99dfa988b06eeea backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:38 compute-0 sudo[128442]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:39 compute-0 sudo[128594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvpsskovtemhjympyegqhuaydmcycyny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003558.963925-119-252127865520531/AnsiballZ_stat.py'
Jan 21 13:52:39 compute-0 sudo[128594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:39 compute-0 python3.9[128596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:39 compute-0 sudo[128594]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:52:39
Jan 21 13:52:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:52:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:52:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'volumes', 'vms', 'backups', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 21 13:52:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:52:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:39 compute-0 sudo[128717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukaycmfgbuwkiidiwvjuzzvfnbznssau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003558.963925-119-252127865520531/AnsiballZ_copy.py'
Jan 21 13:52:39 compute-0 sudo[128717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:39 compute-0 python3.9[128719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003558.963925-119-252127865520531/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4e90a8a3f55f20db41805edac4f667e0b6bdddc6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:39 compute-0 sudo[128717]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:40 compute-0 sudo[128869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzjpfbseutahpvokkyrmqdfdycbgvmdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003560.1222758-119-226576297937512/AnsiballZ_stat.py'
Jan 21 13:52:40 compute-0 sudo[128869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:40 compute-0 python3.9[128871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:40 compute-0 sudo[128869]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:40 compute-0 ceph-mon[75031]: pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:40 compute-0 sudo[128992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sawknnbkmivdayanlcmqgagtocpunqfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003560.1222758-119-226576297937512/AnsiballZ_copy.py'
Jan 21 13:52:40 compute-0 sudo[128992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:52:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:52:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:52:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:52:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:52:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:52:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:52:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:52:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:52:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:52:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:52:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:52:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:52:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:52:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:52:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:52:41 compute-0 python3.9[128994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003560.1222758-119-226576297937512/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e8eb98a943f8b191d5c4163b48c22ed2f7d7cf7c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:41 compute-0 sudo[128992]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:41 compute-0 sudo[129144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fplclhbvcyvzxeezczoijvgrjjmycqwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003561.3588715-163-108541613429047/AnsiballZ_file.py'
Jan 21 13:52:41 compute-0 sudo[129144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:41 compute-0 python3.9[129146]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:41 compute-0 sudo[129144]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:42 compute-0 sudo[129296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njfitcnualjbkjgyruqihnkmsnqngzgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003562.0147812-163-221381380604629/AnsiballZ_file.py'
Jan 21 13:52:42 compute-0 sudo[129296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:42 compute-0 python3.9[129298]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:42 compute-0 sudo[129296]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:42 compute-0 ceph-mon[75031]: pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:43 compute-0 sudo[129448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srygebivfonrfvsczzsqdxhcmvvqbopc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003562.7096884-178-67789584894166/AnsiballZ_stat.py'
Jan 21 13:52:43 compute-0 sudo[129448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:43 compute-0 python3.9[129450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:43 compute-0 sudo[129448]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:43 compute-0 sudo[129571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewusojnefvnesqzjbxfdvctwwkjcpsvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003562.7096884-178-67789584894166/AnsiballZ_copy.py'
Jan 21 13:52:43 compute-0 sudo[129571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:43 compute-0 python3.9[129573]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003562.7096884-178-67789584894166/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=53aa8456c1d094ebc5c2b2d2ac43375382ab931d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:43 compute-0 sudo[129571]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:44 compute-0 sudo[129723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blrjnacamcfkvusjufiwklcjdgnbomme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003563.9778585-178-10773835385543/AnsiballZ_stat.py'
Jan 21 13:52:44 compute-0 sudo[129723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:44 compute-0 python3.9[129725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:44 compute-0 sudo[129723]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:45 compute-0 sudo[129846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knpfwrthndnoxcegavyjjzdhcfselkkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003563.9778585-178-10773835385543/AnsiballZ_copy.py'
Jan 21 13:52:45 compute-0 sudo[129846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:46 compute-0 ceph-mon[75031]: pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:47 compute-0 python3.9[129848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003563.9778585-178-10773835385543/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4e90a8a3f55f20db41805edac4f667e0b6bdddc6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:47 compute-0 sudo[129846]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:47 compute-0 sudo[129998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cszprletytzbebrchulmpetxupoqkcmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003567.1920822-178-630904898664/AnsiballZ_stat.py'
Jan 21 13:52:47 compute-0 sudo[129998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:47 compute-0 python3.9[130000]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:47 compute-0 sudo[129998]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:48 compute-0 sudo[130121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prhrbbxgehifvaryolftmtbsmvmvqttc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003567.1920822-178-630904898664/AnsiballZ_copy.py'
Jan 21 13:52:48 compute-0 sudo[130121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:48 compute-0 ceph-mon[75031]: pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:48 compute-0 python3.9[130123]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003567.1920822-178-630904898664/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ab87c69949efff77275cd43a8f928010d3306784 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:48 compute-0 sudo[130121]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:49 compute-0 ceph-mon[75031]: pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:49 compute-0 sudo[130273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sujuvmbtysufchzzpmrfoppjmmynlgzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003569.021917-238-256938652978980/AnsiballZ_file.py'
Jan 21 13:52:49 compute-0 sudo[130273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:49 compute-0 python3.9[130275]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:49 compute-0 sudo[130273]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:49 compute-0 sudo[130425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cugyxyrfppyxojfczbhhfiznfneedoee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003569.677586-246-29049587607302/AnsiballZ_stat.py'
Jan 21 13:52:50 compute-0 sudo[130425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:50 compute-0 python3.9[130427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:50 compute-0 sudo[130425]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:52:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 13:52:50 compute-0 sudo[130548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgjgcybsticrvhclrvsfttsaqlcyccbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003569.677586-246-29049587607302/AnsiballZ_copy.py'
Jan 21 13:52:50 compute-0 sudo[130548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:50 compute-0 python3.9[130550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003569.677586-246-29049587607302/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:50 compute-0 sudo[130548]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:51 compute-0 sudo[130700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-metqqxvblarjiqltdbwypgryejhpuzxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003571.016937-262-125189761517465/AnsiballZ_file.py'
Jan 21 13:52:51 compute-0 sudo[130700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:51 compute-0 python3.9[130702]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:51 compute-0 sudo[130700]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:52 compute-0 sudo[130852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ircepjpidnghzzfryjwziudpuzffjyhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003571.7837162-270-155204276527152/AnsiballZ_stat.py'
Jan 21 13:52:52 compute-0 sudo[130852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:52 compute-0 ceph-mon[75031]: pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:52 compute-0 python3.9[130854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:52 compute-0 sudo[130852]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:52 compute-0 sudo[130975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irzmeweksxartwuaurmnkiemyhcmxngd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003571.7837162-270-155204276527152/AnsiballZ_copy.py'
Jan 21 13:52:52 compute-0 sudo[130975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:52 compute-0 python3.9[130977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003571.7837162-270-155204276527152/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:52 compute-0 sudo[130975]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:53 compute-0 ceph-mon[75031]: pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:53 compute-0 sudo[131127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twaumzazmmopwvvmaprquxxtjtkjdrrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003573.0625193-286-137237172427525/AnsiballZ_file.py'
Jan 21 13:52:53 compute-0 sudo[131127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:53 compute-0 python3.9[131129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:53 compute-0 sudo[131127]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:54 compute-0 sudo[131279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrtmepyyvlkjcvzywytwycfabbjziudm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003573.8766284-294-220471936710064/AnsiballZ_stat.py'
Jan 21 13:52:54 compute-0 sudo[131279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:54 compute-0 python3.9[131281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:54 compute-0 sudo[131279]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:54 compute-0 sudo[131402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytjoyiqempbhvqorwtdzgqkuurbvkqwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003573.8766284-294-220471936710064/AnsiballZ_copy.py'
Jan 21 13:52:54 compute-0 sudo[131402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:55 compute-0 python3.9[131404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003573.8766284-294-220471936710064/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:55 compute-0 sudo[131402]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:55 compute-0 ceph-mon[75031]: pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:55 compute-0 sudo[131554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jipfgmyadthnfetlakgyvjtxsadouckp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003575.2665431-310-201366877728158/AnsiballZ_file.py'
Jan 21 13:52:55 compute-0 sudo[131554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:55 compute-0 python3.9[131556]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:55 compute-0 sudo[131554]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:56 compute-0 sudo[131706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohaxtwteqqnfiwqlyfosnfgumpczlvfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003575.989661-318-220574947470297/AnsiballZ_stat.py'
Jan 21 13:52:56 compute-0 sudo[131706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:56 compute-0 python3.9[131708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:56 compute-0 sudo[131706]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:56 compute-0 sudo[131829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otipuvgktotaequchqcliiiojnckercq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003575.989661-318-220574947470297/AnsiballZ_copy.py'
Jan 21 13:52:56 compute-0 sudo[131829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:57 compute-0 python3.9[131831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003575.989661-318-220574947470297/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:57 compute-0 sudo[131829]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:57 compute-0 ceph-mon[75031]: pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:57 compute-0 sudo[131981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgzjjoqlnmreyfjhjokopvjfkdtxzosi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003577.2656455-334-26110703382759/AnsiballZ_file.py'
Jan 21 13:52:57 compute-0 sudo[131981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:57 compute-0 python3.9[131983]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:57 compute-0 sudo[131981]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:52:58 compute-0 sudo[132133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brvdxadxplzrsxpmxqyyabonuowwqpib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003577.9658418-342-11136171326849/AnsiballZ_stat.py'
Jan 21 13:52:58 compute-0 sudo[132133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:58 compute-0 python3.9[132135]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:52:58 compute-0 sudo[132133]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:58 compute-0 sudo[132256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofosiarfmhvkcjesedomuussrmumnpjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003577.9658418-342-11136171326849/AnsiballZ_copy.py'
Jan 21 13:52:58 compute-0 sudo[132256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:59 compute-0 python3.9[132258]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003577.9658418-342-11136171326849/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:52:59 compute-0 sudo[132256]: pam_unix(sudo:session): session closed for user root
Jan 21 13:52:59 compute-0 ceph-mon[75031]: pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:59 compute-0 sudo[132408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uguyqqtlxispscvaqljyowskgfrsqyye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003579.361142-358-176906876158206/AnsiballZ_file.py'
Jan 21 13:52:59 compute-0 sudo[132408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:52:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:52:59 compute-0 python3.9[132410]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:52:59 compute-0 sudo[132408]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:00 compute-0 sudo[132560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqzqxejnuriwlfhvbazzywkturjcfyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003580.1260636-366-227236086502854/AnsiballZ_stat.py'
Jan 21 13:53:00 compute-0 sudo[132560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:00 compute-0 python3.9[132562]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:00 compute-0 sudo[132560]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:01 compute-0 sudo[132683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfzcglotbwtmrrtxurmsnsiluvxkqxmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003580.1260636-366-227236086502854/AnsiballZ_copy.py'
Jan 21 13:53:01 compute-0 sudo[132683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:01 compute-0 python3.9[132685]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003580.1260636-366-227236086502854/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ac64e0e6ed3b9aa17fd22f147080322e8c52f52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:01 compute-0 sudo[132683]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:01 compute-0 ceph-mon[75031]: pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:01 compute-0 sshd-session[126581]: Connection closed by 192.168.122.30 port 37272
Jan 21 13:53:01 compute-0 sshd-session[126578]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:53:01 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 21 13:53:01 compute-0 systemd[1]: session-44.scope: Consumed 24.413s CPU time.
Jan 21 13:53:01 compute-0 systemd-logind[780]: Session 44 logged out. Waiting for processes to exit.
Jan 21 13:53:01 compute-0 systemd-logind[780]: Removed session 44.
Jan 21 13:53:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:03 compute-0 ceph-mon[75031]: pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:05 compute-0 ceph-mon[75031]: pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:06 compute-0 sudo[132710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:53:06 compute-0 sudo[132710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:53:06 compute-0 sudo[132710]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:06 compute-0 sudo[132735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:53:06 compute-0 sudo[132735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:53:06 compute-0 sudo[132735]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:53:06 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:53:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:53:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:53:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:53:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:53:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:53:06 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:53:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:53:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:53:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:53:06 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:53:07 compute-0 sudo[132791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:53:07 compute-0 sudo[132791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:53:07 compute-0 sudo[132791]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:07 compute-0 sudo[132816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:53:07 compute-0 sudo[132816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:53:07 compute-0 sshd-session[132834]: Accepted publickey for zuul from 192.168.122.30 port 52810 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:53:07 compute-0 systemd-logind[780]: New session 45 of user zuul.
Jan 21 13:53:07 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 21 13:53:07 compute-0 sshd-session[132834]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:53:07 compute-0 podman[132908]: 2026-01-21 13:53:07.471393132 +0000 UTC m=+0.049091973 container create 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:53:07 compute-0 systemd[1]: Started libpod-conmon-69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78.scope.
Jan 21 13:53:07 compute-0 podman[132908]: 2026-01-21 13:53:07.450916288 +0000 UTC m=+0.028615119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:53:07 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:53:07 compute-0 podman[132908]: 2026-01-21 13:53:07.567979607 +0000 UTC m=+0.145678428 container init 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:53:07 compute-0 podman[132908]: 2026-01-21 13:53:07.575148487 +0000 UTC m=+0.152847298 container start 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 13:53:07 compute-0 podman[132908]: 2026-01-21 13:53:07.579018129 +0000 UTC m=+0.156716960 container attach 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 13:53:07 compute-0 vigorous_spence[132924]: 167 167
Jan 21 13:53:07 compute-0 systemd[1]: libpod-69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78.scope: Deactivated successfully.
Jan 21 13:53:07 compute-0 podman[132908]: 2026-01-21 13:53:07.58287431 +0000 UTC m=+0.160573121 container died 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 13:53:07 compute-0 ceph-mon[75031]: pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:53:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:53:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:53:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:53:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:53:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:53:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ee0f1ec70c0473925cd0c83b45234f4df4f33fa81682780aac759494a9b46da-merged.mount: Deactivated successfully.
Jan 21 13:53:07 compute-0 podman[132908]: 2026-01-21 13:53:07.717401374 +0000 UTC m=+0.295100225 container remove 69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Jan 21 13:53:07 compute-0 systemd[1]: libpod-conmon-69a9fa4acc2f96d1c36084ff3c4234dc7e5fc2e6fef83b7f1c1dcc0e53328e78.scope: Deactivated successfully.
Jan 21 13:53:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:07 compute-0 podman[133019]: 2026-01-21 13:53:07.885340868 +0000 UTC m=+0.038653706 container create c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:53:07 compute-0 sudo[133059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnbehawmodjjvkcvoyurulsocswneihu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003587.3420157-17-279392999113251/AnsiballZ_file.py'
Jan 21 13:53:07 compute-0 sudo[133059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:07 compute-0 systemd[1]: Started libpod-conmon-c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf.scope.
Jan 21 13:53:07 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:07 compute-0 podman[133019]: 2026-01-21 13:53:07.867737601 +0000 UTC m=+0.021050449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:53:07 compute-0 podman[133019]: 2026-01-21 13:53:07.978939824 +0000 UTC m=+0.132252652 container init c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:53:07 compute-0 podman[133019]: 2026-01-21 13:53:07.986322588 +0000 UTC m=+0.139635406 container start c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 13:53:07 compute-0 podman[133019]: 2026-01-21 13:53:07.989679147 +0000 UTC m=+0.142991965 container attach c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 13:53:08 compute-0 python3.9[133061]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:08 compute-0 sudo[133059]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:08 compute-0 wizardly_dijkstra[133064]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:53:08 compute-0 wizardly_dijkstra[133064]: --> All data devices are unavailable
Jan 21 13:53:08 compute-0 systemd[1]: libpod-c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf.scope: Deactivated successfully.
Jan 21 13:53:08 compute-0 podman[133019]: 2026-01-21 13:53:08.496872591 +0000 UTC m=+0.650185439 container died c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 13:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-99256fb7434c01ea517f27685a847da952935c6ce8f3890cb13c6106ff63ba9f-merged.mount: Deactivated successfully.
Jan 21 13:53:08 compute-0 podman[133019]: 2026-01-21 13:53:08.555329304 +0000 UTC m=+0.708642122 container remove c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 13:53:08 compute-0 systemd[1]: libpod-conmon-c7b37c24fd19176389e6ccd3454270d22bd6adacd17dc01115fe6e4c076a6eaf.scope: Deactivated successfully.
Jan 21 13:53:08 compute-0 sudo[132816]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:08 compute-0 sudo[133204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:53:08 compute-0 sudo[133204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:53:08 compute-0 sudo[133204]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:08 compute-0 sudo[133252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:53:08 compute-0 sudo[133252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:53:08 compute-0 sudo[133294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soxoyxxejhmglihxdsqgaghzsfysxhoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003588.3045938-29-171429541803179/AnsiballZ_stat.py'
Jan 21 13:53:08 compute-0 sudo[133294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:08 compute-0 python3.9[133298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:08 compute-0 sudo[133294]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:09 compute-0 podman[133324]: 2026-01-21 13:53:09.081851774 +0000 UTC m=+0.056033117 container create e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:53:09 compute-0 systemd[1]: Started libpod-conmon-e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609.scope.
Jan 21 13:53:09 compute-0 podman[133324]: 2026-01-21 13:53:09.04868395 +0000 UTC m=+0.022865333 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:53:09 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:53:09 compute-0 podman[133324]: 2026-01-21 13:53:09.165652637 +0000 UTC m=+0.139833960 container init e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 13:53:09 compute-0 podman[133324]: 2026-01-21 13:53:09.176226378 +0000 UTC m=+0.150407681 container start e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 13:53:09 compute-0 podman[133324]: 2026-01-21 13:53:09.179773952 +0000 UTC m=+0.153955245 container attach e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 13:53:09 compute-0 affectionate_bardeen[133374]: 167 167
Jan 21 13:53:09 compute-0 systemd[1]: libpod-e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609.scope: Deactivated successfully.
Jan 21 13:53:09 compute-0 podman[133324]: 2026-01-21 13:53:09.184434723 +0000 UTC m=+0.158616056 container died e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 13:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf464fa6c25bbff2ff37d26bdcab941b1c234f363ed3a817e1a5bb8785e86ca1-merged.mount: Deactivated successfully.
Jan 21 13:53:09 compute-0 podman[133324]: 2026-01-21 13:53:09.225725589 +0000 UTC m=+0.199906892 container remove e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:53:09 compute-0 systemd[1]: libpod-conmon-e434275a0906e1c2cb709af946019014cfcaffa6582204c9a8c6d08e56bfb609.scope: Deactivated successfully.
Jan 21 13:53:09 compute-0 podman[133405]: 2026-01-21 13:53:09.40447876 +0000 UTC m=+0.045627281 container create 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 13:53:09 compute-0 systemd[1]: Started libpod-conmon-8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29.scope.
Jan 21 13:53:09 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aab7ead8d3f7209c7ff5c76cf51e75f22a09486e06bfc0bc6cace916aa709bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aab7ead8d3f7209c7ff5c76cf51e75f22a09486e06bfc0bc6cace916aa709bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aab7ead8d3f7209c7ff5c76cf51e75f22a09486e06bfc0bc6cace916aa709bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:09 compute-0 podman[133405]: 2026-01-21 13:53:09.38675129 +0000 UTC m=+0.027899821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aab7ead8d3f7209c7ff5c76cf51e75f22a09486e06bfc0bc6cace916aa709bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:09 compute-0 podman[133405]: 2026-01-21 13:53:09.492329279 +0000 UTC m=+0.133477810 container init 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 13:53:09 compute-0 podman[133405]: 2026-01-21 13:53:09.505656784 +0000 UTC m=+0.146805295 container start 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:53:09 compute-0 podman[133405]: 2026-01-21 13:53:09.508536872 +0000 UTC m=+0.149685383 container attach 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 21 13:53:09 compute-0 sudo[133493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajxgrjhtsqlqyqlonwkfsvemigmdornx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003588.3045938-29-171429541803179/AnsiballZ_copy.py'
Jan 21 13:53:09 compute-0 sudo[133493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:09 compute-0 ceph-mon[75031]: pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:09 compute-0 python3.9[133495]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003588.3045938-29-171429541803179/.source.conf _original_basename=ceph.conf follow=False checksum=d208d2e00ec30de06826064adf7fede1b3379f31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:09 compute-0 sudo[133493]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:09 compute-0 hungry_payne[133451]: {
Jan 21 13:53:09 compute-0 hungry_payne[133451]:     "0": [
Jan 21 13:53:09 compute-0 hungry_payne[133451]:         {
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "devices": [
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "/dev/loop3"
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             ],
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_name": "ceph_lv0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_size": "21470642176",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "name": "ceph_lv0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "tags": {
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.cluster_name": "ceph",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.crush_device_class": "",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.encrypted": "0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.objectstore": "bluestore",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.osd_id": "0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.type": "block",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.vdo": "0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.with_tpm": "0"
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             },
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "type": "block",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "vg_name": "ceph_vg0"
Jan 21 13:53:09 compute-0 hungry_payne[133451]:         }
Jan 21 13:53:09 compute-0 hungry_payne[133451]:     ],
Jan 21 13:53:09 compute-0 hungry_payne[133451]:     "1": [
Jan 21 13:53:09 compute-0 hungry_payne[133451]:         {
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "devices": [
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "/dev/loop4"
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             ],
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_name": "ceph_lv1",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_size": "21470642176",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "name": "ceph_lv1",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "tags": {
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.cluster_name": "ceph",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.crush_device_class": "",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.encrypted": "0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.objectstore": "bluestore",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.osd_id": "1",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.type": "block",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.vdo": "0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.with_tpm": "0"
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             },
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "type": "block",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "vg_name": "ceph_vg1"
Jan 21 13:53:09 compute-0 hungry_payne[133451]:         }
Jan 21 13:53:09 compute-0 hungry_payne[133451]:     ],
Jan 21 13:53:09 compute-0 hungry_payne[133451]:     "2": [
Jan 21 13:53:09 compute-0 hungry_payne[133451]:         {
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "devices": [
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "/dev/loop5"
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             ],
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_name": "ceph_lv2",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_size": "21470642176",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "name": "ceph_lv2",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "tags": {
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.cluster_name": "ceph",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.crush_device_class": "",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.encrypted": "0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.objectstore": "bluestore",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.osd_id": "2",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.type": "block",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.vdo": "0",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:                 "ceph.with_tpm": "0"
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             },
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "type": "block",
Jan 21 13:53:09 compute-0 hungry_payne[133451]:             "vg_name": "ceph_vg2"
Jan 21 13:53:09 compute-0 hungry_payne[133451]:         }
Jan 21 13:53:09 compute-0 hungry_payne[133451]:     ]
Jan 21 13:53:09 compute-0 hungry_payne[133451]: }
Jan 21 13:53:09 compute-0 systemd[1]: libpod-8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29.scope: Deactivated successfully.
Jan 21 13:53:09 compute-0 podman[133405]: 2026-01-21 13:53:09.834900636 +0000 UTC m=+0.476049157 container died 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aab7ead8d3f7209c7ff5c76cf51e75f22a09486e06bfc0bc6cace916aa709bc-merged.mount: Deactivated successfully.
Jan 21 13:53:09 compute-0 podman[133405]: 2026-01-21 13:53:09.888647248 +0000 UTC m=+0.529795769 container remove 8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:53:09 compute-0 systemd[1]: libpod-conmon-8c382e2ebacda0e5c710d81483880cff6dfba20d3dff883280739aff3e4d4c29.scope: Deactivated successfully.
Jan 21 13:53:09 compute-0 sudo[133252]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:09 compute-0 sudo[133570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:53:09 compute-0 sudo[133570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:53:10 compute-0 sudo[133570]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:10 compute-0 sudo[133627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:53:10 compute-0 sudo[133627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:53:10 compute-0 sudo[133710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrrlinonzilhvaypiumzerhkuygtehsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003589.9104726-29-216954102189545/AnsiballZ_stat.py'
Jan 21 13:53:10 compute-0 sudo[133710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:10 compute-0 podman[133725]: 2026-01-21 13:53:10.347330153 +0000 UTC m=+0.039731531 container create 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 13:53:10 compute-0 python3.9[133712]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:10 compute-0 systemd[1]: Started libpod-conmon-905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46.scope.
Jan 21 13:53:10 compute-0 sudo[133710]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:10 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:53:10 compute-0 podman[133725]: 2026-01-21 13:53:10.420542496 +0000 UTC m=+0.112943884 container init 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:53:10 compute-0 podman[133725]: 2026-01-21 13:53:10.328712692 +0000 UTC m=+0.021114090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:53:10 compute-0 podman[133725]: 2026-01-21 13:53:10.426600339 +0000 UTC m=+0.119001717 container start 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 13:53:10 compute-0 podman[133725]: 2026-01-21 13:53:10.430256575 +0000 UTC m=+0.122657983 container attach 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:53:10 compute-0 jovial_cori[133741]: 167 167
Jan 21 13:53:10 compute-0 systemd[1]: libpod-905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46.scope: Deactivated successfully.
Jan 21 13:53:10 compute-0 podman[133725]: 2026-01-21 13:53:10.431477134 +0000 UTC m=+0.123878512 container died 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 13:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-96074004c301c3eeaecfa1a9a92483b8293da97581cfeeaf493b2bdbc57333bd-merged.mount: Deactivated successfully.
Jan 21 13:53:10 compute-0 podman[133725]: 2026-01-21 13:53:10.466523303 +0000 UTC m=+0.158924681 container remove 905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 13:53:10 compute-0 systemd[1]: libpod-conmon-905499da4ae3630ed78d84bc8e79cbc087e6d7dfad77cc83b6dd5782bf50ef46.scope: Deactivated successfully.
Jan 21 13:53:10 compute-0 podman[133824]: 2026-01-21 13:53:10.630477553 +0000 UTC m=+0.041726167 container create 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:53:10 compute-0 systemd[1]: Started libpod-conmon-59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf.scope.
Jan 21 13:53:10 compute-0 podman[133824]: 2026-01-21 13:53:10.61298009 +0000 UTC m=+0.024228724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:53:10 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ec6e2ce2b8f64fcd7dff24d55f28a3763f730a7a8fb2b31890972ccd6196e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ec6e2ce2b8f64fcd7dff24d55f28a3763f730a7a8fb2b31890972ccd6196e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ec6e2ce2b8f64fcd7dff24d55f28a3763f730a7a8fb2b31890972ccd6196e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ec6e2ce2b8f64fcd7dff24d55f28a3763f730a7a8fb2b31890972ccd6196e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:53:10 compute-0 sudo[133905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elxvhvgzbkzrblxwgobxmpolxgiqfbdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003589.9104726-29-216954102189545/AnsiballZ_copy.py'
Jan 21 13:53:10 compute-0 podman[133824]: 2026-01-21 13:53:10.750115025 +0000 UTC m=+0.161363659 container init 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:53:10 compute-0 sudo[133905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:10 compute-0 podman[133824]: 2026-01-21 13:53:10.759309392 +0000 UTC m=+0.170558006 container start 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 13:53:10 compute-0 podman[133824]: 2026-01-21 13:53:10.764984717 +0000 UTC m=+0.176233331 container attach 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 13:53:10 compute-0 python3.9[133908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003589.9104726-29-216954102189545/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=01672c665cebe1978e709c2eff9d48fb31c7992e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:10 compute-0 sudo[133905]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:53:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:53:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:53:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:53:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:53:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:53:11 compute-0 sshd-session[132844]: Connection closed by 192.168.122.30 port 52810
Jan 21 13:53:11 compute-0 sshd-session[132834]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:53:11 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 21 13:53:11 compute-0 systemd[1]: session-45.scope: Consumed 2.697s CPU time.
Jan 21 13:53:11 compute-0 systemd-logind[780]: Session 45 logged out. Waiting for processes to exit.
Jan 21 13:53:11 compute-0 systemd-logind[780]: Removed session 45.
Jan 21 13:53:11 compute-0 lvm[134008]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:53:11 compute-0 lvm[134008]: VG ceph_vg1 finished
Jan 21 13:53:11 compute-0 lvm[134007]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:53:11 compute-0 lvm[134007]: VG ceph_vg0 finished
Jan 21 13:53:11 compute-0 lvm[134010]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:53:11 compute-0 lvm[134010]: VG ceph_vg2 finished
Jan 21 13:53:11 compute-0 hungry_rhodes[133876]: {}
Jan 21 13:53:11 compute-0 systemd[1]: libpod-59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf.scope: Deactivated successfully.
Jan 21 13:53:11 compute-0 systemd[1]: libpod-59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf.scope: Consumed 1.382s CPU time.
Jan 21 13:53:11 compute-0 podman[133824]: 2026-01-21 13:53:11.599449276 +0000 UTC m=+1.010697890 container died 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:53:11 compute-0 ceph-mon[75031]: pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-62ec6e2ce2b8f64fcd7dff24d55f28a3763f730a7a8fb2b31890972ccd6196e1-merged.mount: Deactivated successfully.
Jan 21 13:53:11 compute-0 podman[133824]: 2026-01-21 13:53:11.652840819 +0000 UTC m=+1.064089433 container remove 59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 21 13:53:11 compute-0 systemd[1]: libpod-conmon-59609a5697175974d9b1bef4452ef01b0990d7ef3575738bb7c2f8e9adc18aaf.scope: Deactivated successfully.
Jan 21 13:53:11 compute-0 sudo[133627]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:53:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:53:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:53:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:53:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:11 compute-0 sudo[134025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:53:11 compute-0 sudo[134025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:53:11 compute-0 sudo[134025]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:53:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:53:12 compute-0 ceph-mon[75031]: pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:15 compute-0 ceph-mon[75031]: pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:17 compute-0 ceph-mon[75031]: pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:17 compute-0 sshd-session[134050]: Accepted publickey for zuul from 192.168.122.30 port 59590 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:53:17 compute-0 systemd-logind[780]: New session 46 of user zuul.
Jan 21 13:53:17 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 21 13:53:17 compute-0 sshd-session[134050]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:53:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:18 compute-0 python3.9[134203]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:53:19 compute-0 sudo[134357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onvvvutvdjqjbthsjkkhkmoubqecfhvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003598.8071146-29-42921052980360/AnsiballZ_file.py'
Jan 21 13:53:19 compute-0 sudo[134357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:19 compute-0 ceph-mon[75031]: pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:19 compute-0 python3.9[134359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:53:19 compute-0 sudo[134357]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:19 compute-0 sudo[134509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gydekypilfwlptbhltepaeoabxgtqlvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003599.630419-29-241413568383004/AnsiballZ_file.py'
Jan 21 13:53:19 compute-0 sudo[134509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:20 compute-0 python3.9[134511]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:53:20 compute-0 sudo[134509]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:20 compute-0 python3.9[134661]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:53:21 compute-0 ceph-mon[75031]: pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:21 compute-0 sudo[134811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyjbuzwhiyjrnsokhlktxqacvnldqgus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003601.06504-52-162785989693408/AnsiballZ_seboolean.py'
Jan 21 13:53:21 compute-0 sudo[134811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:21 compute-0 python3.9[134813]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 21 13:53:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:22 compute-0 sudo[134811]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:23 compute-0 ceph-mon[75031]: pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:23 compute-0 sudo[134967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrljxzskmkxgiriuatuytkzaburrcrmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003603.2175028-62-56748229067240/AnsiballZ_setup.py'
Jan 21 13:53:23 compute-0 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 21 13:53:23 compute-0 sudo[134967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:23 compute-0 python3.9[134969]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:53:24 compute-0 sudo[134967]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:24 compute-0 sudo[135051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anpylwoxfkyosiobtyoyuoevyceajuey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003603.2175028-62-56748229067240/AnsiballZ_dnf.py'
Jan 21 13:53:24 compute-0 sudo[135051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:24 compute-0 python3.9[135053]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:53:25 compute-0 ceph-mon[75031]: pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:25 compute-0 sudo[135051]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:26 compute-0 sudo[135204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmerqbiljjkfkshczqwkwnbrvbeysvjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003606.0754173-74-11740622092549/AnsiballZ_systemd.py'
Jan 21 13:53:26 compute-0 sudo[135204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:26 compute-0 ceph-mon[75031]: pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:26 compute-0 python3.9[135206]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 13:53:27 compute-0 sudo[135204]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:27 compute-0 sudo[135359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liwckpsgoqjkovjeuvrqccewkqfatakc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769003607.3279738-82-272072704672870/AnsiballZ_edpm_nftables_snippet.py'
Jan 21 13:53:27 compute-0 sudo[135359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:27 compute-0 python3[135361]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 21 13:53:27 compute-0 sudo[135359]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.288616) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608288710, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1788, "num_deletes": 250, "total_data_size": 2483619, "memory_usage": 2536824, "flush_reason": "Manual Compaction"}
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608306512, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1476589, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7288, "largest_seqno": 9075, "table_properties": {"data_size": 1470694, "index_size": 2650, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 18076, "raw_average_key_size": 21, "raw_value_size": 1456420, "raw_average_value_size": 1711, "num_data_blocks": 124, "num_entries": 851, "num_filter_entries": 851, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003453, "oldest_key_time": 1769003453, "file_creation_time": 1769003608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 17950 microseconds, and 8483 cpu microseconds.
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.306579) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1476589 bytes OK
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.306599) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.308094) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.308115) EVENT_LOG_v1 {"time_micros": 1769003608308109, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.308139) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2475581, prev total WAL file size 2475581, number of live WAL files 2.
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.309155) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1441KB)], [20(7595KB)]
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608309239, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9254332, "oldest_snapshot_seqno": -1}
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3416 keys, 7252103 bytes, temperature: kUnknown
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608371471, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7252103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7226063, "index_size": 16394, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 81552, "raw_average_key_size": 23, "raw_value_size": 7161202, "raw_average_value_size": 2096, "num_data_blocks": 726, "num_entries": 3416, "num_filter_entries": 3416, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769003608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.371773) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7252103 bytes
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.375172) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.4 rd, 116.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.4 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(11.2) write-amplify(4.9) OK, records in: 3856, records dropped: 440 output_compression: NoCompression
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.375196) EVENT_LOG_v1 {"time_micros": 1769003608375184, "job": 6, "event": "compaction_finished", "compaction_time_micros": 62364, "compaction_time_cpu_micros": 19747, "output_level": 6, "num_output_files": 1, "total_output_size": 7252103, "num_input_records": 3856, "num_output_records": 3416, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608375530, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003608377021, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.309020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.377095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.377101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.377103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.377104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:53:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:53:28.377105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:53:28 compute-0 sudo[135511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ameaffzcdciiexrcztevrrhmxumxuzrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003608.22535-91-87590387961800/AnsiballZ_file.py'
Jan 21 13:53:28 compute-0 sudo[135511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:28 compute-0 python3.9[135513]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:28 compute-0 sudo[135511]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:29 compute-0 ceph-mon[75031]: pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:29 compute-0 sudo[135663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbmvgnmzdqvncehdegqocgmmdxowusvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003608.885288-99-56757687382839/AnsiballZ_stat.py'
Jan 21 13:53:29 compute-0 sudo[135663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:29 compute-0 python3.9[135665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:29 compute-0 sudo[135663]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:29 compute-0 sudo[135741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxzsokiagzdlozeeqbpvnplprqohjouc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003608.885288-99-56757687382839/AnsiballZ_file.py'
Jan 21 13:53:29 compute-0 sudo[135741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:30 compute-0 python3.9[135743]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:30 compute-0 sudo[135741]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:30 compute-0 sudo[135893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxoebnhcitiumpprdajwsjcmvrgxrrtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003610.2451594-111-155889945787769/AnsiballZ_stat.py'
Jan 21 13:53:30 compute-0 sudo[135893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:30 compute-0 python3.9[135895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:30 compute-0 sudo[135893]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:31 compute-0 sudo[135971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyeovnqciwyersuvomeacfvryvzlnbew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003610.2451594-111-155889945787769/AnsiballZ_file.py'
Jan 21 13:53:31 compute-0 sudo[135971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:31 compute-0 python3.9[135973]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yw5b2sxy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:31 compute-0 sudo[135971]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:31 compute-0 ceph-mon[75031]: pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:31 compute-0 sudo[136123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upgmqkaljbgzguvtjgnpqsivqpsfdtyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003611.3863857-123-237554824132932/AnsiballZ_stat.py'
Jan 21 13:53:31 compute-0 sudo[136123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:31 compute-0 python3.9[136125]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:31 compute-0 sudo[136123]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:32 compute-0 sudo[136201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgiecdifoaozonyppookxbscshxpgcsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003611.3863857-123-237554824132932/AnsiballZ_file.py'
Jan 21 13:53:32 compute-0 sudo[136201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:32 compute-0 python3.9[136203]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:32 compute-0 sudo[136201]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:33 compute-0 sudo[136353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fswomrzwyqnwnunkbqymgrnyrawpfxkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003612.6492429-136-280965265585939/AnsiballZ_command.py'
Jan 21 13:53:33 compute-0 sudo[136353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:33 compute-0 python3.9[136355]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:53:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:33 compute-0 sudo[136353]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:33 compute-0 ceph-mon[75031]: pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:34 compute-0 sudo[136506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcrvdgrxgcjolrathrrzlpkagcfodsim ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769003613.5136585-144-160181139263136/AnsiballZ_edpm_nftables_from_files.py'
Jan 21 13:53:34 compute-0 sudo[136506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:34 compute-0 python3[136508]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 13:53:34 compute-0 sudo[136506]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:34 compute-0 sudo[136658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwwgapwuxmxhcfzkougnsdmibmbaffny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003614.4467416-152-96310264957561/AnsiballZ_stat.py'
Jan 21 13:53:34 compute-0 sudo[136658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:35 compute-0 python3.9[136660]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:35 compute-0 sudo[136658]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:35 compute-0 ceph-mon[75031]: pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:35 compute-0 sudo[136783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjtgzkfaereghzirgjfvxvnezhtzrwmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003614.4467416-152-96310264957561/AnsiballZ_copy.py'
Jan 21 13:53:35 compute-0 sudo[136783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:35 compute-0 python3.9[136785]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003614.4467416-152-96310264957561/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:35 compute-0 sudo[136783]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:36 compute-0 sudo[136935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeensqpsqyhpqietqsalsssyyyrlntks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003616.054529-167-194603325785554/AnsiballZ_stat.py'
Jan 21 13:53:36 compute-0 sudo[136935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:36 compute-0 python3.9[136937]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:36 compute-0 sudo[136935]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:37 compute-0 sudo[137060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fubswbnxnmrytgswakzudqzcntkbezwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003616.054529-167-194603325785554/AnsiballZ_copy.py'
Jan 21 13:53:37 compute-0 sudo[137060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:37 compute-0 python3.9[137062]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003616.054529-167-194603325785554/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:37 compute-0 sudo[137060]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:37 compute-0 ceph-mon[75031]: pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:37 compute-0 sudo[137212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bboyopojnbcfftakznkhthikbyuueldu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003617.4118094-182-114681095382234/AnsiballZ_stat.py'
Jan 21 13:53:37 compute-0 sudo[137212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:37 compute-0 python3.9[137214]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:37 compute-0 sudo[137212]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:38 compute-0 sudo[137337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bihkjkvbnzzetkexuhonzjrvkfyxxvuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003617.4118094-182-114681095382234/AnsiballZ_copy.py'
Jan 21 13:53:38 compute-0 sudo[137337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:38 compute-0 python3.9[137339]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003617.4118094-182-114681095382234/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:38 compute-0 sudo[137337]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:38 compute-0 ceph-mon[75031]: pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:39 compute-0 sudo[137489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeyyzfjcyamjvvnjgjahtydutwtbtwzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003618.7606864-197-103150635357382/AnsiballZ_stat.py'
Jan 21 13:53:39 compute-0 sudo[137489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:39 compute-0 python3.9[137491]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:39 compute-0 sudo[137489]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:53:39
Jan 21 13:53:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:53:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:53:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta']
Jan 21 13:53:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:53:39 compute-0 sudo[137614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eooamjllovuyxlfzirchstqubqerpgjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003618.7606864-197-103150635357382/AnsiballZ_copy.py'
Jan 21 13:53:39 compute-0 sudo[137614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:39 compute-0 python3.9[137616]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003618.7606864-197-103150635357382/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:39 compute-0 sudo[137614]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:40 compute-0 sudo[137766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eratcqrvoyrmawxmxgfzjreyvoezjnic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003620.0412009-212-196556252261035/AnsiballZ_stat.py'
Jan 21 13:53:40 compute-0 sudo[137766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:40 compute-0 python3.9[137768]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:40 compute-0 sudo[137766]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:40 compute-0 ceph-mon[75031]: pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:53:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:53:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:53:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:53:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:53:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:53:41 compute-0 sudo[137891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivubaldkaxcgaqsrskgesssceeawtwee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003620.0412009-212-196556252261035/AnsiballZ_copy.py'
Jan 21 13:53:41 compute-0 sudo[137891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:53:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:53:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:53:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:53:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:53:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:53:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:53:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:53:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:53:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:53:41 compute-0 python3.9[137893]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003620.0412009-212-196556252261035/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:41 compute-0 sudo[137891]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:41 compute-0 sudo[138043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtpeaunojenprvzrvqnygzgvieucmooc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003621.4255857-227-41168936419647/AnsiballZ_file.py'
Jan 21 13:53:41 compute-0 sudo[138043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:41 compute-0 python3.9[138045]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:41 compute-0 sudo[138043]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:42 compute-0 sudo[138195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dccwbjurfwcbipfgbdmqkdoweabtxtrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003622.104766-235-7645040454999/AnsiballZ_command.py'
Jan 21 13:53:42 compute-0 sudo[138195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:42 compute-0 python3.9[138197]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:53:42 compute-0 sudo[138195]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:42 compute-0 ceph-mon[75031]: pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:43 compute-0 sudo[138350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awhuhifqbislehpjngacmcmaeipqyzeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003622.761324-243-211373710352137/AnsiballZ_blockinfile.py'
Jan 21 13:53:43 compute-0 sudo[138350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:43 compute-0 python3.9[138352]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:43 compute-0 sudo[138350]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:43 compute-0 sudo[138502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzndceafgeuedhhnfyfwoohxlaqlkteo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003623.6702206-252-197302645576019/AnsiballZ_command.py'
Jan 21 13:53:43 compute-0 sudo[138502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:44 compute-0 python3.9[138504]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:53:44 compute-0 sudo[138502]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:44 compute-0 sudo[138655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boyogvmqswsgivjkgzitajlkjayqfizo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003624.4085612-260-272593762727400/AnsiballZ_stat.py'
Jan 21 13:53:44 compute-0 sudo[138655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:44 compute-0 ceph-mon[75031]: pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:44 compute-0 python3.9[138657]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:53:44 compute-0 sudo[138655]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:45 compute-0 sudo[138809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiwdyqgdpmxvpvdjinkgznbwpmcyhtcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003625.127971-268-37022326081811/AnsiballZ_command.py'
Jan 21 13:53:45 compute-0 sudo[138809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:45 compute-0 python3.9[138811]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:53:45 compute-0 sudo[138809]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:46 compute-0 sudo[138964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-easnwyntkupatjaunzptokgemhehsymy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003625.7956343-276-210390359357489/AnsiballZ_file.py'
Jan 21 13:53:46 compute-0 sudo[138964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:46 compute-0 python3.9[138966]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:46 compute-0 sudo[138964]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:47 compute-0 ceph-mon[75031]: pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:47 compute-0 python3.9[139116]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:53:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:48 compute-0 sudo[139267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqskrxsmmneowfbuueaghgwzjyzhaknn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003628.0503833-316-102863741062100/AnsiballZ_command.py'
Jan 21 13:53:48 compute-0 sudo[139267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:48 compute-0 python3.9[139269]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:53:48 compute-0 ovs-vsctl[139270]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 21 13:53:48 compute-0 sudo[139267]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:49 compute-0 ceph-mon[75031]: pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:49 compute-0 sudo[139420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llrlcrjpzakqbtdylvwskuqgplglqtip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003628.7668757-325-216798249411865/AnsiballZ_command.py'
Jan 21 13:53:49 compute-0 sudo[139420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:49 compute-0 python3.9[139422]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:53:49 compute-0 sudo[139420]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:49 compute-0 sudo[139575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujqcfgkutghypnyhmzwciqqebjzbmwfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003629.4865758-333-147225691690693/AnsiballZ_command.py'
Jan 21 13:53:49 compute-0 sudo[139575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:49 compute-0 python3.9[139577]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:53:49 compute-0 ovs-vsctl[139578]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 21 13:53:50 compute-0 sudo[139575]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:53:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 13:53:50 compute-0 python3.9[139728]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:53:51 compute-0 ceph-mon[75031]: pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:51 compute-0 sudo[139880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqubnjjzxshtsesyjljgdylrerfpqkgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003630.891485-350-61975457909155/AnsiballZ_file.py'
Jan 21 13:53:51 compute-0 sudo[139880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:51 compute-0 python3.9[139882]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:53:51 compute-0 sudo[139880]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:51 compute-0 sudo[140032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mohqyggakjoswycgdnnvjxkbykqwhsxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003631.5957327-358-136543406117028/AnsiballZ_stat.py'
Jan 21 13:53:51 compute-0 sudo[140032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:52 compute-0 python3.9[140034]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:52 compute-0 sudo[140032]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:52 compute-0 sudo[140110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jclgnbxtpvmjpdqriicqfpjzwpjvtror ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003631.5957327-358-136543406117028/AnsiballZ_file.py'
Jan 21 13:53:52 compute-0 sudo[140110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:52 compute-0 python3.9[140112]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:53:52 compute-0 sudo[140110]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:53 compute-0 ceph-mon[75031]: pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:53 compute-0 sudo[140262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xanvzxcphgjuoiyyfemazgaqvvtgzkde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003633.0714853-358-232285028459274/AnsiballZ_stat.py'
Jan 21 13:53:53 compute-0 sudo[140262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:53 compute-0 python3.9[140264]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:53 compute-0 sudo[140262]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:53 compute-0 sudo[140340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elncckorsejhkakvgkxzwddnfqosgnzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003633.0714853-358-232285028459274/AnsiballZ_file.py'
Jan 21 13:53:53 compute-0 sudo[140340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:54 compute-0 python3.9[140342]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:53:54 compute-0 sudo[140340]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:54 compute-0 sudo[140492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jytvmgccjvfrqlnhezutumaikhmvpqpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003634.2367702-381-99960411104473/AnsiballZ_file.py'
Jan 21 13:53:54 compute-0 sudo[140492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:54 compute-0 python3.9[140494]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:54 compute-0 sudo[140492]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:55 compute-0 ceph-mon[75031]: pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:55 compute-0 sudo[140644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plukalxdgtxeapsrqcpberpwcmfvupdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003634.9266508-389-223593110745970/AnsiballZ_stat.py'
Jan 21 13:53:55 compute-0 sudo[140644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:55 compute-0 python3.9[140646]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:55 compute-0 sudo[140644]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:55 compute-0 sudo[140722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfbswcutltvhspjotxjsjxylvhbxvwrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003634.9266508-389-223593110745970/AnsiballZ_file.py'
Jan 21 13:53:55 compute-0 sudo[140722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:55 compute-0 python3.9[140724]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:55 compute-0 sudo[140722]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:56 compute-0 sudo[140874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otwttikegidllvsmxuhykftuapvlatby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003636.0918965-401-118330786543973/AnsiballZ_stat.py'
Jan 21 13:53:56 compute-0 sudo[140874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:56 compute-0 python3.9[140876]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:56 compute-0 sudo[140874]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:56 compute-0 sudo[140952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qweexitcycslwwzxzmaozrvjsaqzfgeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003636.0918965-401-118330786543973/AnsiballZ_file.py'
Jan 21 13:53:56 compute-0 sudo[140952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:57 compute-0 python3.9[140954]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:57 compute-0 sudo[140952]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:57 compute-0 ceph-mon[75031]: pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:57 compute-0 sudo[141104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcvxkwfekctpsuwbbglltnxfskeaajye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003637.220567-413-159675068367046/AnsiballZ_systemd.py'
Jan 21 13:53:57 compute-0 sudo[141104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:57 compute-0 python3.9[141106]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:53:57 compute-0 systemd[1]: Reloading.
Jan 21 13:53:57 compute-0 systemd-rc-local-generator[141135]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:53:57 compute-0 systemd-sysv-generator[141139]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:53:58 compute-0 sudo[141104]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:53:58 compute-0 sudo[141294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceubvtdrcgqqkbcmxfejxobeclsyuhuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003638.3395498-421-270691264805234/AnsiballZ_stat.py'
Jan 21 13:53:58 compute-0 sudo[141294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:58 compute-0 python3.9[141296]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:53:58 compute-0 sudo[141294]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:59 compute-0 ceph-mon[75031]: pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:59 compute-0 sudo[141372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfytxkxtwfuuqqnovyznonrzuezvegmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003638.3395498-421-270691264805234/AnsiballZ_file.py'
Jan 21 13:53:59 compute-0 sudo[141372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:53:59 compute-0 python3.9[141374]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:53:59 compute-0 sudo[141372]: pam_unix(sudo:session): session closed for user root
Jan 21 13:53:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:53:59 compute-0 sudo[141524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdvrdfiopcxgrcmzvrwcavokcpreuimx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003639.5240383-433-141377398549791/AnsiballZ_stat.py'
Jan 21 13:53:59 compute-0 sudo[141524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:00 compute-0 python3.9[141526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:00 compute-0 sudo[141524]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:00 compute-0 sudo[141602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbatwwrfdjtacvqivxsbobzmorujsomq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003639.5240383-433-141377398549791/AnsiballZ_file.py'
Jan 21 13:54:00 compute-0 sudo[141602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:00 compute-0 python3.9[141604]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:54:00 compute-0 sudo[141602]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:01 compute-0 sudo[141754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saqawbpzslknqntotzybkhhonwkqrodq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003640.7280326-445-221843154949238/AnsiballZ_systemd.py'
Jan 21 13:54:01 compute-0 sudo[141754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:01 compute-0 ceph-mon[75031]: pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:01 compute-0 python3.9[141756]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:54:01 compute-0 systemd[1]: Reloading.
Jan 21 13:54:01 compute-0 systemd-rc-local-generator[141783]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:54:01 compute-0 systemd-sysv-generator[141788]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:54:01 compute-0 systemd[1]: Starting Create netns directory...
Jan 21 13:54:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 13:54:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 13:54:01 compute-0 systemd[1]: Finished Create netns directory.
Jan 21 13:54:01 compute-0 sudo[141754]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:02 compute-0 sudo[141948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdzcsdozygqastzjrxflgliwxsnnbpgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003641.9859834-455-216503330524277/AnsiballZ_file.py'
Jan 21 13:54:02 compute-0 sudo[141948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:02 compute-0 python3.9[141950]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:02 compute-0 sudo[141948]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:03 compute-0 sudo[142100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxmxvznvmshfyiyggpfpfvvlxwukisfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003642.7039864-463-29500110450385/AnsiballZ_stat.py'
Jan 21 13:54:03 compute-0 sudo[142100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:03 compute-0 ceph-mon[75031]: pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:03 compute-0 python3.9[142102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:03 compute-0 sudo[142100]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:03 compute-0 sudo[142223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugltpalbnglctnpnbneylehacqmnszmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003642.7039864-463-29500110450385/AnsiballZ_copy.py'
Jan 21 13:54:03 compute-0 sudo[142223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:03 compute-0 python3.9[142225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003642.7039864-463-29500110450385/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:03 compute-0 sudo[142223]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:04 compute-0 sudo[142375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozgndzrdtxmsafuarqsqflhxyqnlqltb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003644.2516682-480-87958800135560/AnsiballZ_file.py'
Jan 21 13:54:04 compute-0 sudo[142375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:04 compute-0 python3.9[142377]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:54:04 compute-0 sudo[142375]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:05 compute-0 ceph-mon[75031]: pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:05 compute-0 sudo[142527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgeahepntdmnkugvmazwtwurelqfwvpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003644.9587123-488-18525648445123/AnsiballZ_file.py'
Jan 21 13:54:05 compute-0 sudo[142527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:05 compute-0 python3.9[142529]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:05 compute-0 sudo[142527]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:05 compute-0 sudo[142679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdcmsdzzblsiunizzalremrfdgzptcbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003645.7288966-496-220393008982330/AnsiballZ_stat.py'
Jan 21 13:54:05 compute-0 sudo[142679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:06 compute-0 python3.9[142681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:06 compute-0 sudo[142679]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:06 compute-0 sudo[142802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofroeaqilvilnudnikfvbwywquafxfrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003645.7288966-496-220393008982330/AnsiballZ_copy.py'
Jan 21 13:54:06 compute-0 sudo[142802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:06 compute-0 python3.9[142804]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003645.7288966-496-220393008982330/.source.json _original_basename=.sc0j3sg2 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:54:06 compute-0 sudo[142802]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:07 compute-0 ceph-mon[75031]: pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:07 compute-0 python3.9[142954]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:54:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:09 compute-0 ceph-mon[75031]: pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:09 compute-0 sudo[143375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqhgevorhczrkmpilcfxdhnaihkyezya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003648.8962898-536-9791144663400/AnsiballZ_container_config_data.py'
Jan 21 13:54:09 compute-0 sudo[143375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:09 compute-0 python3.9[143377]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 21 13:54:09 compute-0 sudo[143375]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:10 compute-0 sudo[143527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfultaplnbrvgvlkaszvkkbzmjhoyxwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003649.8302135-547-280467716195008/AnsiballZ_container_config_hash.py'
Jan 21 13:54:10 compute-0 sudo[143527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:10 compute-0 python3.9[143529]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 13:54:10 compute-0 sudo[143527]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:54:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:54:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:54:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:54:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:54:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:54:11 compute-0 ceph-mon[75031]: pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:11 compute-0 sudo[143679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzcsyjgzptbvcmldefvwsbiuuiyhhzau ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769003650.7919028-557-263536350929707/AnsiballZ_edpm_container_manage.py'
Jan 21 13:54:11 compute-0 sudo[143679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:11 compute-0 python3[143681]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 13:54:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:11 compute-0 sudo[143706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:54:11 compute-0 sudo[143706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:54:11 compute-0 sudo[143706]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:11 compute-0 sudo[143731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:54:11 compute-0 sudo[143731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:54:12 compute-0 sudo[143731]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:54:12 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:54:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:54:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:54:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:54:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:54:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:54:12 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:54:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:54:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:54:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:54:12 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:54:12 compute-0 sudo[143805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:54:12 compute-0 sudo[143805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:54:12 compute-0 sudo[143805]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:12 compute-0 sudo[143832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:54:12 compute-0 sudo[143832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:54:13 compute-0 podman[143870]: 2026-01-21 13:54:13.095664756 +0000 UTC m=+0.052534621 container create a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:54:13 compute-0 systemd[1]: Started libpod-conmon-a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d.scope.
Jan 21 13:54:13 compute-0 podman[143870]: 2026-01-21 13:54:13.066728207 +0000 UTC m=+0.023598092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:54:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:54:13 compute-0 podman[143870]: 2026-01-21 13:54:13.196403613 +0000 UTC m=+0.153273498 container init a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:54:13 compute-0 podman[143870]: 2026-01-21 13:54:13.20425881 +0000 UTC m=+0.161128675 container start a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:54:13 compute-0 thirsty_pare[143886]: 167 167
Jan 21 13:54:13 compute-0 systemd[1]: libpod-a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d.scope: Deactivated successfully.
Jan 21 13:54:13 compute-0 podman[143870]: 2026-01-21 13:54:13.210453437 +0000 UTC m=+0.167323322 container attach a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 13:54:13 compute-0 podman[143870]: 2026-01-21 13:54:13.211716998 +0000 UTC m=+0.168586863 container died a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 13:54:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ec9c70bbbaa63249967bce316513c759d426101fd4cf0d7d531bee23d807210-merged.mount: Deactivated successfully.
Jan 21 13:54:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:13 compute-0 podman[143870]: 2026-01-21 13:54:13.312724831 +0000 UTC m=+0.269594696 container remove a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 21 13:54:13 compute-0 systemd[1]: libpod-conmon-a8824bf55ecd488fa64f3e0f718ff656824933ccc325d604385dc79c087f0d7d.scope: Deactivated successfully.
Jan 21 13:54:13 compute-0 podman[143911]: 2026-01-21 13:54:13.449162107 +0000 UTC m=+0.029745259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:54:13 compute-0 ceph-mon[75031]: pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:54:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:54:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:54:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:54:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:54:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:54:13 compute-0 podman[143911]: 2026-01-21 13:54:13.551289776 +0000 UTC m=+0.131872908 container create 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:54:13 compute-0 systemd[1]: Started libpod-conmon-32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e.scope.
Jan 21 13:54:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:13 compute-0 podman[143911]: 2026-01-21 13:54:13.646472871 +0000 UTC m=+0.227056013 container init 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:54:13 compute-0 podman[143911]: 2026-01-21 13:54:13.656997532 +0000 UTC m=+0.237580664 container start 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:54:13 compute-0 podman[143911]: 2026-01-21 13:54:13.660714091 +0000 UTC m=+0.241297233 container attach 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 13:54:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:14 compute-0 stupefied_pare[143927]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:54:14 compute-0 stupefied_pare[143927]: --> All data devices are unavailable
Jan 21 13:54:14 compute-0 systemd[1]: libpod-32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e.scope: Deactivated successfully.
Jan 21 13:54:14 compute-0 conmon[143927]: conmon 32132a26dd1c888b50bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e.scope/container/memory.events
Jan 21 13:54:14 compute-0 podman[143911]: 2026-01-21 13:54:14.17304424 +0000 UTC m=+0.753627382 container died 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Jan 21 13:54:15 compute-0 ceph-mon[75031]: pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-388ca55e5c5e310e86d219f52e461ea2b2ac4faba79b64b85ba2b480ee570d98-merged.mount: Deactivated successfully.
Jan 21 13:54:17 compute-0 podman[143911]: 2026-01-21 13:54:17.05119128 +0000 UTC m=+3.631774412 container remove 32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 13:54:17 compute-0 systemd[1]: libpod-conmon-32132a26dd1c888b50bff253d066d5c81737cfdf9bcc412011769e6754d9d99e.scope: Deactivated successfully.
Jan 21 13:54:17 compute-0 podman[143693]: 2026-01-21 13:54:17.077692831 +0000 UTC m=+5.439354840 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 13:54:17 compute-0 sudo[143832]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:17 compute-0 sudo[144030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:54:17 compute-0 sudo[144030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:54:17 compute-0 sudo[144030]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:17 compute-0 sudo[144072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:54:17 compute-0 sudo[144072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:54:17 compute-0 podman[144071]: 2026-01-21 13:54:17.215767366 +0000 UTC m=+0.044933910 container create 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 13:54:17 compute-0 podman[144071]: 2026-01-21 13:54:17.19197454 +0000 UTC m=+0.021141104 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 13:54:17 compute-0 python3[143681]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 21 13:54:17 compute-0 sudo[143679]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:17 compute-0 podman[144169]: 2026-01-21 13:54:17.471737316 +0000 UTC m=+0.041138610 container create 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 21 13:54:17 compute-0 systemd[1]: Started libpod-conmon-047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e.scope.
Jan 21 13:54:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:54:17 compute-0 podman[144169]: 2026-01-21 13:54:17.545123513 +0000 UTC m=+0.114524817 container init 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:54:17 compute-0 podman[144169]: 2026-01-21 13:54:17.451587377 +0000 UTC m=+0.020988701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:54:17 compute-0 podman[144169]: 2026-01-21 13:54:17.553587024 +0000 UTC m=+0.122988318 container start 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 21 13:54:17 compute-0 podman[144169]: 2026-01-21 13:54:17.556672687 +0000 UTC m=+0.126073981 container attach 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:54:17 compute-0 wonderful_wilson[144214]: 167 167
Jan 21 13:54:17 compute-0 podman[144169]: 2026-01-21 13:54:17.559864253 +0000 UTC m=+0.129265537 container died 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 13:54:17 compute-0 systemd[1]: libpod-047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e.scope: Deactivated successfully.
Jan 21 13:54:17 compute-0 ceph-mon[75031]: pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5322174228702c682b6693ce7c653722ab0c22711d043f19e9a61363941242f-merged.mount: Deactivated successfully.
Jan 21 13:54:17 compute-0 podman[144169]: 2026-01-21 13:54:17.603681796 +0000 UTC m=+0.173083090 container remove 047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:54:17 compute-0 systemd[1]: libpod-conmon-047ab6dd38376dea921f014da0fa3c6a7032bd407589097f088a7adfac60d91e.scope: Deactivated successfully.
Jan 21 13:54:17 compute-0 sudo[144340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzsaaioqblohpstbzyezhslfegzoposv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003657.4898868-565-89763239400351/AnsiballZ_stat.py'
Jan 21 13:54:17 compute-0 sudo[144340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:17 compute-0 podman[144318]: 2026-01-21 13:54:17.759489143 +0000 UTC m=+0.041515049 container create 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:54:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:17 compute-0 systemd[1]: Started libpod-conmon-64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c.scope.
Jan 21 13:54:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b76cbf083bba3e1d3941744e18de2bf9e2ebf9854abc6552eb9fe3a31631ecb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b76cbf083bba3e1d3941744e18de2bf9e2ebf9854abc6552eb9fe3a31631ecb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b76cbf083bba3e1d3941744e18de2bf9e2ebf9854abc6552eb9fe3a31631ecb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b76cbf083bba3e1d3941744e18de2bf9e2ebf9854abc6552eb9fe3a31631ecb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:17 compute-0 podman[144318]: 2026-01-21 13:54:17.740215604 +0000 UTC m=+0.022241530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:54:17 compute-0 python3.9[144348]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:54:17 compute-0 sudo[144340]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:18 compute-0 podman[144318]: 2026-01-21 13:54:18.030481881 +0000 UTC m=+0.312507797 container init 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 13:54:18 compute-0 podman[144318]: 2026-01-21 13:54:18.037657701 +0000 UTC m=+0.319683607 container start 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:54:18 compute-0 podman[144318]: 2026-01-21 13:54:18.229940386 +0000 UTC m=+0.511966292 container attach 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 13:54:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]: {
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:     "0": [
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:         {
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "devices": [
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "/dev/loop3"
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             ],
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_name": "ceph_lv0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_size": "21470642176",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "name": "ceph_lv0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "tags": {
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.cluster_name": "ceph",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.crush_device_class": "",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.encrypted": "0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.objectstore": "bluestore",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.osd_id": "0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.type": "block",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.vdo": "0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.with_tpm": "0"
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             },
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "type": "block",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "vg_name": "ceph_vg0"
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:         }
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:     ],
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:     "1": [
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:         {
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "devices": [
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "/dev/loop4"
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             ],
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_name": "ceph_lv1",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_size": "21470642176",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "name": "ceph_lv1",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "tags": {
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.cluster_name": "ceph",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.crush_device_class": "",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.encrypted": "0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.objectstore": "bluestore",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.osd_id": "1",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.type": "block",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.vdo": "0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.with_tpm": "0"
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             },
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "type": "block",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "vg_name": "ceph_vg1"
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:         }
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:     ],
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:     "2": [
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:         {
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "devices": [
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "/dev/loop5"
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             ],
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_name": "ceph_lv2",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_size": "21470642176",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "name": "ceph_lv2",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "tags": {
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.cluster_name": "ceph",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.crush_device_class": "",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.encrypted": "0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.objectstore": "bluestore",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.osd_id": "2",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.type": "block",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.vdo": "0",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:                 "ceph.with_tpm": "0"
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             },
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "type": "block",
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:             "vg_name": "ceph_vg2"
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:         }
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]:     ]
Jan 21 13:54:18 compute-0 affectionate_merkle[144352]: }
Jan 21 13:54:18 compute-0 systemd[1]: libpod-64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c.scope: Deactivated successfully.
Jan 21 13:54:18 compute-0 podman[144318]: 2026-01-21 13:54:18.375249584 +0000 UTC m=+0.657275510 container died 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 13:54:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b76cbf083bba3e1d3941744e18de2bf9e2ebf9854abc6552eb9fe3a31631ecb-merged.mount: Deactivated successfully.
Jan 21 13:54:18 compute-0 podman[144318]: 2026-01-21 13:54:18.463964344 +0000 UTC m=+0.745990260 container remove 64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 13:54:18 compute-0 systemd[1]: libpod-conmon-64dabb514cf0b380188ea10ed962c3f6c6fc70ee1e539dec921bb3d00ede418c.scope: Deactivated successfully.
Jan 21 13:54:18 compute-0 sudo[144526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toqkatantzpxhzcqarcqkguqwdmljwtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003658.2013223-574-149172554452666/AnsiballZ_file.py'
Jan 21 13:54:18 compute-0 sudo[144072]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:18 compute-0 sudo[144526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:18 compute-0 sudo[144529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:54:18 compute-0 sudo[144529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:54:18 compute-0 sudo[144529]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:18 compute-0 sudo[144554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:54:18 compute-0 sudo[144554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:54:18 compute-0 python3.9[144528]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:54:18 compute-0 sudo[144526]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:18 compute-0 podman[144632]: 2026-01-21 13:54:18.888543706 +0000 UTC m=+0.041781884 container create b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:54:18 compute-0 systemd[1]: Started libpod-conmon-b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059.scope.
Jan 21 13:54:18 compute-0 sudo[144678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyfwitnqkxbtupbcfbndugpfcpqhisem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003658.2013223-574-149172554452666/AnsiballZ_stat.py'
Jan 21 13:54:18 compute-0 sudo[144678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:18 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:54:18 compute-0 podman[144632]: 2026-01-21 13:54:18.965911127 +0000 UTC m=+0.119149325 container init b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 13:54:18 compute-0 podman[144632]: 2026-01-21 13:54:18.871957492 +0000 UTC m=+0.025195700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:54:18 compute-0 podman[144632]: 2026-01-21 13:54:18.97360089 +0000 UTC m=+0.126839068 container start b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 13:54:18 compute-0 podman[144632]: 2026-01-21 13:54:18.977342499 +0000 UTC m=+0.130580697 container attach b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 13:54:18 compute-0 bold_zhukovsky[144680]: 167 167
Jan 21 13:54:18 compute-0 systemd[1]: libpod-b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059.scope: Deactivated successfully.
Jan 21 13:54:18 compute-0 podman[144632]: 2026-01-21 13:54:18.979361147 +0000 UTC m=+0.132599345 container died b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:54:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-886cbbc55f812f8bbc6465ac736ede54daa27b17ee7df377c7d33f3bad261145-merged.mount: Deactivated successfully.
Jan 21 13:54:19 compute-0 podman[144632]: 2026-01-21 13:54:19.028400614 +0000 UTC m=+0.181638792 container remove b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 13:54:19 compute-0 systemd[1]: libpod-conmon-b0b61b3ba75aec1c649897d88cb43bcff776a5aff93096c48905ce657bc4c059.scope: Deactivated successfully.
Jan 21 13:54:19 compute-0 python3.9[144682]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:54:19 compute-0 sudo[144678]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:19 compute-0 podman[144704]: 2026-01-21 13:54:19.167966845 +0000 UTC m=+0.042837251 container create 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 13:54:19 compute-0 systemd[1]: Started libpod-conmon-30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac.scope.
Jan 21 13:54:19 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:54:19 compute-0 podman[144704]: 2026-01-21 13:54:19.144993718 +0000 UTC m=+0.019864144 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bbb8404053954383bbc7c1a29343dc7ac6b9135c28f6c8b1f5a06a205059d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bbb8404053954383bbc7c1a29343dc7ac6b9135c28f6c8b1f5a06a205059d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bbb8404053954383bbc7c1a29343dc7ac6b9135c28f6c8b1f5a06a205059d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bbb8404053954383bbc7c1a29343dc7ac6b9135c28f6c8b1f5a06a205059d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:19 compute-0 podman[144704]: 2026-01-21 13:54:19.273441924 +0000 UTC m=+0.148312370 container init 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:54:19 compute-0 podman[144704]: 2026-01-21 13:54:19.280640215 +0000 UTC m=+0.155510621 container start 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 13:54:19 compute-0 podman[144704]: 2026-01-21 13:54:19.286397783 +0000 UTC m=+0.161268189 container attach 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:54:19 compute-0 ceph-mon[75031]: pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:19 compute-0 sudo[144889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxztzgvcumodmgbqoxsgvntaajgezsor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003659.2097723-574-115651427355373/AnsiballZ_copy.py'
Jan 21 13:54:19 compute-0 sudo[144889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:19 compute-0 python3.9[144895]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003659.2097723-574-115651427355373/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:54:19 compute-0 sudo[144889]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:19 compute-0 lvm[144972]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:54:19 compute-0 lvm[144971]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:54:19 compute-0 lvm[144971]: VG ceph_vg0 finished
Jan 21 13:54:19 compute-0 lvm[144972]: VG ceph_vg1 finished
Jan 21 13:54:20 compute-0 lvm[144976]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:54:20 compute-0 lvm[144976]: VG ceph_vg2 finished
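Note: the lvm messages above are LVM event-based autoactivation at work: udev triggers pvscan as each PV appears, the PV is recorded as online under /run/lvm/pvs_online, and the VG is activated once its last PV is present. A by-hand equivalent using the devices and VG names from this log (run as root):

    pvscan --cache /dev/loop3     # record the PV as online
    vgchange -aay ceph_vg0        # event-style autoactivation of the complete VG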
Jan 21 13:54:20 compute-0 angry_saha[144743]: {}
Jan 21 13:54:20 compute-0 systemd[1]: libpod-30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac.scope: Deactivated successfully.
Jan 21 13:54:20 compute-0 systemd[1]: libpod-30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac.scope: Consumed 1.378s CPU time.
Jan 21 13:54:20 compute-0 podman[144704]: 2026-01-21 13:54:20.127646268 +0000 UTC m=+1.002516674 container died 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2bbb8404053954383bbc7c1a29343dc7ac6b9135c28f6c8b1f5a06a205059d1-merged.mount: Deactivated successfully.
Jan 21 13:54:20 compute-0 sudo[145036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eslsccworsgvxmjupkbnfffjjmxydowm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003659.2097723-574-115651427355373/AnsiballZ_systemd.py'
Jan 21 13:54:20 compute-0 sudo[145036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:20 compute-0 podman[144704]: 2026-01-21 13:54:20.193318051 +0000 UTC m=+1.068188457 container remove 30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
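Note: the create → init → start → attach → died → remove sequence for angry_saha, finishing in about a second with `{}` on stdout, is the journal signature of a short-lived `podman run --rm`; cephadm drives the ceph image this way for one-off probes. An illustrative invocation (not the actual command cephadm executed):

    podman run --rm quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 ceph --version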
Jan 21 13:54:20 compute-0 systemd[1]: libpod-conmon-30538f27739f72fe91227f563305d6588a35afae5b4b8373ed80b907dd3c74ac.scope: Deactivated successfully.
Jan 21 13:54:20 compute-0 sudo[144554]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:54:20 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:54:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.285040) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660285072, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 635, "num_deletes": 251, "total_data_size": 728583, "memory_usage": 740328, "flush_reason": "Manual Compaction"}
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660292069, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 721962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9076, "largest_seqno": 9710, "table_properties": {"data_size": 718652, "index_size": 1218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7388, "raw_average_key_size": 18, "raw_value_size": 711950, "raw_average_value_size": 1771, "num_data_blocks": 58, "num_entries": 402, "num_filter_entries": 402, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003609, "oldest_key_time": 1769003609, "file_creation_time": 1769003660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 7073 microseconds, and 3188 cpu microseconds.
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.292112) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 721962 bytes OK
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.292129) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.295698) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.295722) EVENT_LOG_v1 {"time_micros": 1769003660295717, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.295739) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 725183, prev total WAL file size 765631, number of live WAL files 2.
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.296281) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(705KB)], [23(7082KB)]
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660296383, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7974065, "oldest_snapshot_seqno": -1}
Jan 21 13:54:20 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:54:20 compute-0 sudo[145046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:54:20 compute-0 sudo[145046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:54:20 compute-0 sudo[145046]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:20 compute-0 python3.9[145043]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 13:54:20 compute-0 systemd[1]: Reloading.
Jan 21 13:54:20 compute-0 systemd-rc-local-generator[145097]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:54:20 compute-0 systemd-sysv-generator[145100]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3305 keys, 6170317 bytes, temperature: kUnknown
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660590818, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6170317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6146527, "index_size": 14401, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80064, "raw_average_key_size": 24, "raw_value_size": 6085100, "raw_average_value_size": 1841, "num_data_blocks": 627, "num_entries": 3305, "num_filter_entries": 3305, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769003660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.592293) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6170317 bytes
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.595012) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.0 rd, 20.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 6.9 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(19.6) write-amplify(8.5) OK, records in: 3818, records dropped: 513 output_compression: NoCompression
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.595038) EVENT_LOG_v1 {"time_micros": 1769003660595024, "job": 8, "event": "compaction_finished", "compaction_time_micros": 295633, "compaction_time_cpu_micros": 18448, "output_level": 6, "num_output_files": 1, "total_output_size": 6170317, "num_input_records": 3818, "num_output_records": 3305, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
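Note: the amplification figures in the JOB 8 summary can be reproduced from the job's own numbers: inputs 705 KB (L0 table #25) and 7082 KB (L6 table #23), output 6170317 B ≈ 6026 KB, so

    write-amplify      = 6026 / 705                  ≈ 8.5
    read-write-amplify = (705 + 7082 + 6026) / 705   ≈ 19.6

both matching the logged values.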
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660595326, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003660596475, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.296115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.596568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.596574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.596576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.596577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:54:20 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:54:20.596579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:54:20 compute-0 sudo[145036]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 13:54:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2121 writes, 9581 keys, 2121 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2121 writes, 2121 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2121 writes, 9581 keys, 2121 commit groups, 1.0 writes per commit group, ingest: 12.52 MB, 0.02 MB/s
                                           Interval WAL: 2121 writes, 2121 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     89.5      0.11              0.04         4    0.027       0      0       0.0       0.0
                                             L6      1/0    5.88 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1     40.2     34.0      0.60              0.07         3    0.198     10K   1243       0.0       0.0
                                            Sum      1/0    5.88 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     34.0     42.4      0.70              0.11         7    0.100     10K   1243       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     34.3     42.6      0.70              0.11         6    0.116     10K   1243       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     40.2     34.0      0.60              0.07         3    0.198     10K   1243       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     93.3      0.10              0.04         3    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.009, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.04 MB/s read, 0.7 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.04 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562240bf58d0#2 capacity: 308.00 MB usage: 960.53 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(54,841.66 KB,0.26686%) FilterBlock(8,38.05 KB,0.0120634%) IndexBlock(8,80.83 KB,0.0256278%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 21 13:54:21 compute-0 sudo[145179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbnzixixrnxrozijwkcflygdowecewpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003659.2097723-574-115651427355373/AnsiballZ_systemd.py'
Jan 21 13:54:21 compute-0 sudo[145179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:21 compute-0 ceph-mon[75031]: pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:21 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:54:21 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:54:21 compute-0 python3.9[145181]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:54:21 compute-0 systemd[1]: Reloading.
Jan 21 13:54:21 compute-0 systemd-rc-local-generator[145210]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:54:21 compute-0 systemd-sysv-generator[145214]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
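Note: the ansible-copy of edpm_ovn_controller.service (13:54:19), the daemon_reload (13:54:20), and the restart-with-enable above make up the standard install-and-start sequence for a unit file. A by-hand equivalent, assuming the unit file is in the current directory:

    install -m 0644 -o root -g root edpm_ovn_controller.service /etc/systemd/system/
    systemctl daemon-reload
    systemctl enable edpm_ovn_controller.service
    systemctl restart edpm_ovn_controller.service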
Jan 21 13:54:21 compute-0 systemd[1]: Starting ovn_controller container...
Jan 21 13:54:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74c60d16e88face7e778cd90f2ac050fe7c8c6342cddbdd87a33bfc68cf6d9ff/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 21 13:54:21 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488.
Jan 21 13:54:21 compute-0 podman[145221]: 2026-01-21 13:54:21.855323624 +0000 UTC m=+0.129714526 container init 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 21 13:54:21 compute-0 ovn_controller[145237]: + sudo -E kolla_set_configs
Jan 21 13:54:21 compute-0 podman[145221]: 2026-01-21 13:54:21.888625247 +0000 UTC m=+0.163016139 container start 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 13:54:21 compute-0 edpm-start-podman-container[145221]: ovn_controller
Jan 21 13:54:21 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 21 13:54:21 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 21 13:54:21 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 21 13:54:21 compute-0 podman[145244]: 2026-01-21 13:54:21.96650198 +0000 UTC m=+0.068692405 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 21 13:54:21 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 21 13:54:21 compute-0 systemd[1]: 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488-3c5d5bb8632e499f.service: Main process exited, code=exited, status=1/FAILURE
Jan 21 13:54:21 compute-0 systemd[1]: 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488-3c5d5bb8632e499f.service: Failed with result 'exit-code'.
Jan 21 13:54:21 compute-0 edpm-start-podman-container[145220]: Creating additional drop-in dependency for "ovn_controller" (65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488)
Jan 21 13:54:21 compute-0 systemd[145276]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 21 13:54:22 compute-0 systemd[1]: Reloading.
Jan 21 13:54:22 compute-0 systemd-rc-local-generator[145322]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:54:22 compute-0 systemd-sysv-generator[145325]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:54:22 compute-0 systemd[145276]: Queued start job for default target Main User Target.
Jan 21 13:54:22 compute-0 systemd[145276]: Created slice User Application Slice.
Jan 21 13:54:22 compute-0 systemd[145276]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 21 13:54:22 compute-0 systemd[145276]: Started Daily Cleanup of User's Temporary Directories.
Jan 21 13:54:22 compute-0 systemd[145276]: Reached target Paths.
Jan 21 13:54:22 compute-0 systemd[145276]: Reached target Timers.
Jan 21 13:54:22 compute-0 systemd[145276]: Starting D-Bus User Message Bus Socket...
Jan 21 13:54:22 compute-0 systemd[145276]: Starting Create User's Volatile Files and Directories...
Jan 21 13:54:22 compute-0 systemd[145276]: Listening on D-Bus User Message Bus Socket.
Jan 21 13:54:22 compute-0 systemd[145276]: Reached target Sockets.
Jan 21 13:54:22 compute-0 systemd[145276]: Finished Create User's Volatile Files and Directories.
Jan 21 13:54:22 compute-0 systemd[145276]: Reached target Basic System.
Jan 21 13:54:22 compute-0 systemd[145276]: Reached target Main User Target.
Jan 21 13:54:22 compute-0 systemd[145276]: Startup finished in 163ms.
Jan 21 13:54:22 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 21 13:54:22 compute-0 systemd[1]: Started ovn_controller container.
Jan 21 13:54:22 compute-0 systemd[1]: Started Session c1 of User root.
Jan 21 13:54:22 compute-0 sudo[145179]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:22 compute-0 ovn_controller[145237]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 13:54:22 compute-0 ovn_controller[145237]: INFO:__main__:Validating config file
Jan 21 13:54:22 compute-0 ovn_controller[145237]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 13:54:22 compute-0 ovn_controller[145237]: INFO:__main__:Writing out command to execute
Jan 21 13:54:22 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 21 13:54:22 compute-0 ovn_controller[145237]: ++ cat /run_command
Jan 21 13:54:22 compute-0 ovn_controller[145237]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 21 13:54:22 compute-0 ovn_controller[145237]: + ARGS=
Jan 21 13:54:22 compute-0 ovn_controller[145237]: + sudo kolla_copy_cacerts
Jan 21 13:54:22 compute-0 systemd[1]: Started Session c2 of User root.
Jan 21 13:54:22 compute-0 ovn_controller[145237]: + [[ ! -n '' ]]
Jan 21 13:54:22 compute-0 ovn_controller[145237]: + . kolla_extend_start
Jan 21 13:54:22 compute-0 ovn_controller[145237]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 21 13:54:22 compute-0 ovn_controller[145237]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 21 13:54:22 compute-0 ovn_controller[145237]: + umask 0022
Jan 21 13:54:22 compute-0 ovn_controller[145237]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
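Note: the shell trace above is Kolla's container entrypoint at work: kolla_set_configs validates /var/lib/kolla/config_files/config.json and copies files per the COPY_ALWAYS strategy, the command is written to /run_command, kolla_copy_cacerts installs the CA bundle, and the command is exec'd. A minimal sketch of the config file's shape (the command string is taken verbatim from /run_command above; the empty lists are placeholders):

    cat > /var/lib/kolla/config_files/ovn_controller.json <<'EOF'
    {
        "command": "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt",
        "config_files": [],
        "permissions": []
    }
    EOF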
Jan 21 13:54:22 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 21 13:54:22 compute-0 NetworkManager[48860]: <info>  [1769003662.4231] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 21 13:54:22 compute-0 NetworkManager[48860]: <info>  [1769003662.4243] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 21 13:54:22 compute-0 NetworkManager[48860]: <warn>  [1769003662.4247] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 21 13:54:22 compute-0 NetworkManager[48860]: <info>  [1769003662.4256] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 21 13:54:22 compute-0 NetworkManager[48860]: <info>  [1769003662.4263] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 21 13:54:22 compute-0 NetworkManager[48860]: <info>  [1769003662.4268] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 21 13:54:22 compute-0 kernel: br-int: entered promiscuous mode
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 21 13:54:22 compute-0 systemd-udevd[144969]: Network interface NamePolicy= disabled on kernel command line.
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 13:54:22 compute-0 ovn_controller[145237]: 2026-01-21T13:54:22Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 21 13:54:22 compute-0 NetworkManager[48860]: <info>  [1769003662.4500] manager: (ovn-2384ff-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 21 13:54:22 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 21 13:54:22 compute-0 NetworkManager[48860]: <info>  [1769003662.4647] device (genev_sys_6081): carrier: link connected
Jan 21 13:54:22 compute-0 NetworkManager[48860]: <info>  [1769003662.4654] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
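Note: ovn-controller discovers its southbound endpoint and tunnel settings from external_ids on the local Open_vSwitch record; the SSL remote and the Geneve tunnel device above (genev_sys_6081, UDP 6081) imply a configuration along these lines (ovn-remote is visible in the log; the encap IP is a placeholder):

    ovs-vsctl set open . \
        external_ids:ovn-remote="ssl:ovsdbserver-sb.openstack.svc:6642" \
        external_ids:ovn-encap-type=geneve \
        external_ids:ovn-encap-ip=192.0.2.10    # placeholder address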
Jan 21 13:54:23 compute-0 python3.9[145497]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 21 13:54:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:23 compute-0 sudo[145669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeunzdiapwhfzilcuyxszdrewbluwnsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003663.6576583-619-126984359837032/AnsiballZ_stat.py'
Jan 21 13:54:23 compute-0 sudo[145669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:24 compute-0 ceph-mon[75031]: pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:25 compute-0 ceph-mon[75031]: pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:25 compute-0 python3.9[145671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:25 compute-0 sudo[145669]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:25 compute-0 sudo[145793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikqwarajrvhfotcqipwozxamqgfrbqla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003663.6576583-619-126984359837032/AnsiballZ_copy.py'
Jan 21 13:54:25 compute-0 sudo[145793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:26 compute-0 python3.9[145795]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003663.6576583-619-126984359837032/.source.yaml _original_basename=.1ywj3tdd follow=False checksum=89182053cc7d7956eb47291ba854186cb3c9f799 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:54:26 compute-0 sudo[145793]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:26 compute-0 sudo[145945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwtdjbjyrjkmggxutgtejqgpbduwumpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003666.2877827-634-150177653484790/AnsiballZ_command.py'
Jan 21 13:54:26 compute-0 sudo[145945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:26 compute-0 python3.9[145947]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:54:26 compute-0 ovs-vsctl[145948]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 21 13:54:26 compute-0 sudo[145945]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:27 compute-0 sudo[146098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwtsdgqlmbcyrcmkiuevzxmpvwhpdhwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003666.9949877-642-37978334031958/AnsiballZ_command.py'
Jan 21 13:54:27 compute-0 sudo[146098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:27 compute-0 python3.9[146100]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:54:27 compute-0 ovs-vsctl[146102]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 21 13:54:27 compute-0 sudo[146098]: pam_unix(sudo:session): session closed for user root
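Note: the ovs-vsctl ERR above is expected rather than a failure of the play: `get` on a missing map key exits non-zero by default. ovs-vsctl's --if-exists flag makes the read idempotent (empty output and exit 0 when the key is absent):

    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options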
Jan 21 13:54:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:27 compute-0 ceph-mon[75031]: pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:28 compute-0 sudo[146253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkdxdjxyrthynbvswbvbltrnwspkpjif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003667.9599967-656-239557263171880/AnsiballZ_command.py'
Jan 21 13:54:28 compute-0 sudo[146253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:28 compute-0 python3.9[146255]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:54:28 compute-0 ovs-vsctl[146256]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 21 13:54:28 compute-0 sudo[146253]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:28 compute-0 sshd-session[134053]: Connection closed by 192.168.122.30 port 59590
Jan 21 13:54:28 compute-0 sshd-session[134050]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:54:28 compute-0 systemd-logind[780]: Session 46 logged out. Waiting for processes to exit.
Jan 21 13:54:28 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 21 13:54:28 compute-0 systemd[1]: session-46.scope: Consumed 1min 423ms CPU time.
Jan 21 13:54:28 compute-0 systemd-logind[780]: Removed session 46.
Jan 21 13:54:29 compute-0 ceph-mon[75031]: pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:31 compute-0 ceph-mon[75031]: pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:32 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 21 13:54:32 compute-0 systemd[145276]: Activating special unit Exit the Session...
Jan 21 13:54:32 compute-0 systemd[145276]: Stopped target Main User Target.
Jan 21 13:54:32 compute-0 systemd[145276]: Stopped target Basic System.
Jan 21 13:54:32 compute-0 systemd[145276]: Stopped target Paths.
Jan 21 13:54:32 compute-0 systemd[145276]: Stopped target Sockets.
Jan 21 13:54:32 compute-0 systemd[145276]: Stopped target Timers.
Jan 21 13:54:32 compute-0 systemd[145276]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 21 13:54:32 compute-0 systemd[145276]: Closed D-Bus User Message Bus Socket.
Jan 21 13:54:32 compute-0 systemd[145276]: Stopped Create User's Volatile Files and Directories.
Jan 21 13:54:32 compute-0 systemd[145276]: Removed slice User Application Slice.
Jan 21 13:54:32 compute-0 systemd[145276]: Reached target Shutdown.
Jan 21 13:54:32 compute-0 systemd[145276]: Finished Exit the Session.
Jan 21 13:54:32 compute-0 systemd[145276]: Reached target Exit the Session.
Jan 21 13:54:32 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 21 13:54:32 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 21 13:54:32 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 21 13:54:32 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 21 13:54:32 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 21 13:54:32 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 21 13:54:32 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 21 13:54:33 compute-0 ceph-mon[75031]: pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:33 compute-0 sshd-session[146282]: Accepted publickey for zuul from 192.168.122.30 port 60276 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:54:33 compute-0 systemd-logind[780]: New session 48 of user zuul.
Jan 21 13:54:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:33 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 21 13:54:33 compute-0 sshd-session[146282]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:54:34 compute-0 python3.9[146435]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:54:35 compute-0 ceph-mon[75031]: pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:35 compute-0 sudo[146589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utarfeffbqfxfoseujpwrcvamjytyzul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003675.4561262-29-228955655566496/AnsiballZ_file.py'
Jan 21 13:54:35 compute-0 sudo[146589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:36 compute-0 python3.9[146591]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:36 compute-0 sudo[146589]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:36 compute-0 sudo[146741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bukdxbsiclptltnxyudbiqjberfnvknc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003676.3952966-29-59966183542518/AnsiballZ_file.py'
Jan 21 13:54:36 compute-0 sudo[146741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:36 compute-0 python3.9[146743]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:36 compute-0 sudo[146741]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:37 compute-0 ceph-mon[75031]: pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:37 compute-0 sudo[146893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwdobekhmijbjmhxucpwbcjztcqvagou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003677.0214581-29-142383179388041/AnsiballZ_file.py'
Jan 21 13:54:37 compute-0 sudo[146893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:37 compute-0 python3.9[146895]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:37 compute-0 sudo[146893]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:37 compute-0 sudo[147045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuwzfsatflttjnkintozjwmkxzqigyvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003677.689003-29-186164763112238/AnsiballZ_file.py'
Jan 21 13:54:37 compute-0 sudo[147045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:38 compute-0 python3.9[147047]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:38 compute-0 sudo[147045]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:38 compute-0 sudo[147197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mutybdwnbsylhrgrgrvdtgmwxnetlnbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003678.318002-29-238624020182215/AnsiballZ_file.py'
Jan 21 13:54:38 compute-0 sudo[147197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:38 compute-0 python3.9[147199]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:38 compute-0 sudo[147197]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:39 compute-0 ceph-mon[75031]: pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:54:39
Jan 21 13:54:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:54:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:54:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'vms', 'images', 'backups', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.control']
Jan 21 13:54:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:54:39 compute-0 python3.9[147349]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:54:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:40 compute-0 sudo[147499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqwctmodudulijbzdzhqyohgbbrttvjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003679.9565837-73-177061487319023/AnsiballZ_seboolean.py'
Jan 21 13:54:40 compute-0 sudo[147499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:40 compute-0 python3.9[147501]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
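[annotation] The ansible.posix.seboolean task above persistently enables the virt_sandbox_use_netlink SELinux boolean. Its persistent=True flag corresponds to setsebool's -P switch; a one-line sketch of the equivalent action:

    # Hedged sketch: persistently set the SELinux boolean the seboolean
    # task above manages (persistent=True maps to setsebool -P).
    import subprocess
    subprocess.run(["setsebool", "-P", "virt_sandbox_use_netlink", "on"], check=True)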
Jan 21 13:54:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:54:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:54:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:54:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:54:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:54:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:54:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:54:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:54:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:54:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:54:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:54:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:54:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:54:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:54:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:54:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:54:41 compute-0 sudo[147499]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:41 compute-0 ceph-mon[75031]: pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:42 compute-0 python3.9[147652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:42 compute-0 python3.9[147773]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003681.4734051-81-55308414562917/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
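[annotation] Each ansible.legacy.stat / ansible.legacy.copy pair in this log implements checksum-gated copying: the destination is only rewritten when its SHA-1 differs from the rendered source (the checksum= values in the copy entries are those digests). A minimal sketch of that idempotent pattern, paths from the log, logic illustrative:

    import hashlib, os, shutil

    def sha1(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_if_changed(src, dest, mode=0o755):
        # Only rewrite the target when content actually differs.
        if not os.path.exists(dest) or sha1(dest) != sha1(src):
            shutil.copy(src, dest)
            os.chmod(dest, mode)
            return True   # "changed" in Ansible terms
        return False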
Jan 21 13:54:43 compute-0 ceph-mon[75031]: pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:43 compute-0 python3.9[147923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:44 compute-0 python3.9[148044]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003683.0481741-96-23262624264541/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:44 compute-0 sudo[148194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmncmnfijjzhhguwyspjhehtkryyfitm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003684.4118814-113-122742726485595/AnsiballZ_setup.py'
Jan 21 13:54:44 compute-0 sudo[148194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:45 compute-0 python3.9[148196]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:54:45 compute-0 sudo[148194]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:45 compute-0 ceph-mon[75031]: pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:45 compute-0 sudo[148278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ralpysundhhtmlyeawehonyvzmvzpwhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003684.4118814-113-122742726485595/AnsiballZ_dnf.py'
Jan 21 13:54:45 compute-0 sudo[148278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:45 compute-0 python3.9[148280]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
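[annotation] The ansible.legacy.dnf task above ensures the openvswitch package is present (state=present, best/nobest left at defaults). Ansible's dnf module drives the libdnf Python API directly; shelling out to the dnf CLI is the simple analogue:

    # Sketch only: ensure openvswitch is installed, as the dnf task above does.
    import subprocess
    subprocess.run(["dnf", "-y", "install", "openvswitch"], check=True)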
Jan 21 13:54:47 compute-0 sudo[148278]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:47 compute-0 ceph-mon[75031]: pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:48 compute-0 sudo[148431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aicaehjdolvqkmnqzwlnsvhgbvsmalzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003687.47026-125-107109245949393/AnsiballZ_systemd.py'
Jan 21 13:54:48 compute-0 sudo[148431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:48 compute-0 python3.9[148433]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
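[annotation] The systemd task above both enables and starts openvswitch.service; the combination enabled=True plus state=started collapses to a single systemctl invocation:

    import subprocess
    # enabled=True + state=started is equivalent to `systemctl enable --now`.
    subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"], check=True)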
Jan 21 13:54:48 compute-0 sudo[148431]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:48 compute-0 ceph-mon[75031]: pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:49 compute-0 python3.9[148586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:49 compute-0 python3.9[148707]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003688.5937135-133-152238801728705/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:50 compute-0 python3.9[148857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:54:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 13:54:50 compute-0 python3.9[148978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003689.7898202-133-252259948199097/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:50 compute-0 ceph-mon[75031]: pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:52 compute-0 python3.9[149128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:52 compute-0 ovn_controller[145237]: 2026-01-21T13:54:52Z|00025|memory|INFO|16128 kB peak resident set size after 29.9 seconds
Jan 21 13:54:52 compute-0 ovn_controller[145237]: 2026-01-21T13:54:52Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 21 13:54:52 compute-0 podman[149161]: 2026-01-21 13:54:52.367863136 +0000 UTC m=+0.088379004 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 21 13:54:52 compute-0 python3.9[149273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003691.634257-177-13539636545492/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:52 compute-0 ceph-mon[75031]: pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:53 compute-0 python3.9[149423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:53 compute-0 python3.9[149544]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003692.764324-177-33026031433280/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:54 compute-0 python3.9[149694]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:54:54 compute-0 sudo[149846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvadhjhdzqakwacrfdsubpwanxvhybpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003694.4997616-215-47547488820196/AnsiballZ_file.py'
Jan 21 13:54:54 compute-0 sudo[149846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:55 compute-0 python3.9[149848]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:55 compute-0 sudo[149846]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:55 compute-0 ceph-mon[75031]: pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:55 compute-0 sudo[149998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddqkghqmlohxibtgjcqbpjmxbzuekdtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003695.2298257-223-166231501370718/AnsiballZ_stat.py'
Jan 21 13:54:55 compute-0 sudo[149998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:55 compute-0 python3.9[150000]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:55 compute-0 sudo[149998]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:55 compute-0 sudo[150076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqjkuraxkfnkocwdjrekwudfgephfbkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003695.2298257-223-166231501370718/AnsiballZ_file.py'
Jan 21 13:54:56 compute-0 sudo[150076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:56 compute-0 python3.9[150078]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:56 compute-0 sudo[150076]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:56 compute-0 sudo[150228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cioflaroshvpfjvvkoauoxgdnpieakhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003696.4127703-223-172966781235725/AnsiballZ_stat.py'
Jan 21 13:54:56 compute-0 sudo[150228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:56 compute-0 python3.9[150230]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:56 compute-0 sudo[150228]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:57 compute-0 sudo[150306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arwyvrsuwjodjbnqwejmmoqrauimmwmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003696.4127703-223-172966781235725/AnsiballZ_file.py'
Jan 21 13:54:57 compute-0 sudo[150306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:57 compute-0 ceph-mon[75031]: pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:57 compute-0 python3.9[150308]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:54:57 compute-0 sudo[150306]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:58 compute-0 sudo[150458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vanstriylqeonghebxdvitqmjldczjks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003697.7330012-246-33146184596167/AnsiballZ_file.py'
Jan 21 13:54:58 compute-0 sudo[150458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:58 compute-0 python3.9[150460]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:54:58 compute-0 sudo[150458]: pam_unix(sudo:session): session closed for user root
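[annotation] Note the mode=420 in the file task above: Ansible logged the permission as a decimal integer, and 420 decimal is 0644 octal (rw-r--r--), i.e. the same value neighbouring tasks pass as mode=0644. A one-line check:

    # 420 (decimal) and 0o644 (octal) are the same permission bits.
    assert 420 == 0o644
    print(oct(420))   # -> 0o644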
Jan 21 13:54:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:54:58 compute-0 sudo[150610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnulprnuqeykmheyonynqmkbjzhglmpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003698.3736653-254-96257571172644/AnsiballZ_stat.py'
Jan 21 13:54:58 compute-0 sudo[150610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:58 compute-0 python3.9[150612]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:58 compute-0 sudo[150610]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:59 compute-0 sudo[150688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsvbbrkerrhybccrwgelkhyctxshdngx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003698.3736653-254-96257571172644/AnsiballZ_file.py'
Jan 21 13:54:59 compute-0 sudo[150688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:59 compute-0 python3.9[150690]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:54:59 compute-0 sudo[150688]: pam_unix(sudo:session): session closed for user root
Jan 21 13:54:59 compute-0 ceph-mon[75031]: pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:59 compute-0 sudo[150840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hydhahoutavumhmqoztbtbotdlfspuyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003699.4658258-266-93326983198750/AnsiballZ_stat.py'
Jan 21 13:54:59 compute-0 sudo[150840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:54:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:54:59 compute-0 python3.9[150842]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:54:59 compute-0 sudo[150840]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:00 compute-0 sudo[150918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyqdjlafqznmwkzmeqewmtgqvwggsxqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003699.4658258-266-93326983198750/AnsiballZ_file.py'
Jan 21 13:55:00 compute-0 sudo[150918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:00 compute-0 python3.9[150920]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:00 compute-0 sudo[150918]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:00 compute-0 sudo[151070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kanbsujcfnpzbzihsncftjcczputexhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003700.6040072-278-242545459107979/AnsiballZ_systemd.py'
Jan 21 13:55:00 compute-0 sudo[151070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:01 compute-0 python3.9[151072]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:55:01 compute-0 systemd[1]: Reloading.
Jan 21 13:55:01 compute-0 systemd-rc-local-generator[151096]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:55:01 compute-0 systemd-sysv-generator[151099]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:55:01 compute-0 ceph-mon[75031]: pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:01 compute-0 sudo[151070]: pam_unix(sudo:session): session closed for user root
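[annotation] For edpm-container-shutdown the systemd task additionally passes daemon_reload=True, which is what produces the "Reloading." line and the generator messages above: systemd must re-read unit files before it can enable the freshly installed unit. Sketch of the ordering:

    import subprocess
    # daemon_reload=True forces a daemon-reload before the new unit is
    # enabled and started (unit name taken from the log).
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "--now", "edpm-container-shutdown.service"], check=True)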
Jan 21 13:55:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:02 compute-0 sudo[151260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdetgwmzkehomhvbterejblzotqrvkfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003701.839871-286-73761716815263/AnsiballZ_stat.py'
Jan 21 13:55:02 compute-0 sudo[151260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:02 compute-0 python3.9[151262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:55:02 compute-0 sudo[151260]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:02 compute-0 sudo[151338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukoziforozxxmlmyejztbmwqtiqwbvtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003701.839871-286-73761716815263/AnsiballZ_file.py'
Jan 21 13:55:02 compute-0 sudo[151338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:02 compute-0 python3.9[151340]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:02 compute-0 sudo[151338]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:03 compute-0 sudo[151490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzfrxepkvrcchlxsmwoutkjvlcpuhcij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003703.0194123-298-174248976181302/AnsiballZ_stat.py'
Jan 21 13:55:03 compute-0 sudo[151490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:03 compute-0 ceph-mon[75031]: pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:03 compute-0 python3.9[151492]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:55:03 compute-0 sudo[151490]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:03 compute-0 sudo[151568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-topayelzdnvsrqrlpjozdpbuaycvhqkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003703.0194123-298-174248976181302/AnsiballZ_file.py'
Jan 21 13:55:03 compute-0 sudo[151568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:03 compute-0 python3.9[151570]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:03 compute-0 sudo[151568]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:04 compute-0 sudo[151720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbhwvtaglznvnmzuakbqwahpwzsmawjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003704.1120844-310-277590963957147/AnsiballZ_systemd.py'
Jan 21 13:55:04 compute-0 sudo[151720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:04 compute-0 python3.9[151722]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:55:04 compute-0 systemd[1]: Reloading.
Jan 21 13:55:04 compute-0 systemd-sysv-generator[151755]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:55:04 compute-0 systemd-rc-local-generator[151751]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:55:05 compute-0 systemd[1]: Starting Create netns directory...
Jan 21 13:55:05 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 21 13:55:05 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 21 13:55:05 compute-0 systemd[1]: Finished Create netns directory.
Jan 21 13:55:05 compute-0 sudo[151720]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:05 compute-0 ceph-mon[75031]: pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:05 compute-0 sudo[151915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jphlqvgsjdlhjuglqwenuuvqgvxkkndp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003705.3249972-320-63677256176811/AnsiballZ_file.py'
Jan 21 13:55:05 compute-0 sudo[151915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:05 compute-0 python3.9[151917]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:55:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:05 compute-0 sudo[151915]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:06 compute-0 sudo[152067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayoeijujjokydvejdrvdgbhckcsfiqhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003705.9923124-328-39536336304488/AnsiballZ_stat.py'
Jan 21 13:55:06 compute-0 sudo[152067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:06 compute-0 python3.9[152069]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:55:06 compute-0 sudo[152067]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:06 compute-0 sudo[152190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvacxtwegnqxukceqjocugzlwdvberdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003705.9923124-328-39536336304488/AnsiballZ_copy.py'
Jan 21 13:55:06 compute-0 sudo[152190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:06 compute-0 python3.9[152192]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769003705.9923124-328-39536336304488/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:55:07 compute-0 sudo[152190]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:07 compute-0 ceph-mon[75031]: pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:07 compute-0 sudo[152342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcjuvypnnpvakhdwzsedsyyavwgkdzwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003707.3055449-345-220459192427602/AnsiballZ_file.py'
Jan 21 13:55:07 compute-0 sudo[152342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:07 compute-0 python3.9[152344]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:07 compute-0 sudo[152342]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:08 compute-0 sudo[152494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwmrwmpxdlrmsjapuzcrpzloqdjrcekc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003707.972689-353-280212190699262/AnsiballZ_file.py'
Jan 21 13:55:08 compute-0 sudo[152494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:08 compute-0 python3.9[152496]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:55:08 compute-0 sudo[152494]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:08 compute-0 sudo[152646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzrvfwypevpxikoviwbpyenszkjwxtad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003708.6451545-361-9907093984083/AnsiballZ_stat.py'
Jan 21 13:55:08 compute-0 sudo[152646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:09 compute-0 python3.9[152648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:55:09 compute-0 sudo[152646]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:09 compute-0 ceph-mon[75031]: pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:09 compute-0 sudo[152769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjzefwcaehlspsaglvsvduybpxsmeqjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003708.6451545-361-9907093984083/AnsiballZ_copy.py'
Jan 21 13:55:09 compute-0 sudo[152769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:09 compute-0 python3.9[152771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003708.6451545-361-9907093984083/.source.json _original_basename=.qhm5h_2f follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
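[annotation] The copy above installs /var/lib/kolla/config_files/ovn_metadata_agent.json, which the container later bind-mounts as /var/lib/kolla/config_files/config.json (see the volume list in the podman entries below). A hedged sketch of the general shape of a kolla config.json; the command and config_files entries here are hypothetical placeholders, not the file's actual contents, which the log does not show:

    import json
    # Hypothetical illustration of the kolla config.json layout only.
    config = {
        "command": "neutron-ovn-metadata-agent",          # placeholder
        "config_files": [
            {"source": "/etc/neutron.conf.d/*.conf",      # placeholder
             "dest": "/etc/neutron/",
             "owner": "neutron",
             "perm": "0600"},
        ],
    }
    print(json.dumps(config, indent=2))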
Jan 21 13:55:09 compute-0 sudo[152769]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:10 compute-0 python3.9[152921]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:55:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:55:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:55:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:55:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:55:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:55:11 compute-0 ceph-mon[75031]: pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:12 compute-0 sudo[153342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlwdrizojfqsyqsvutbchrugadhmemft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003711.8002326-401-224828840612646/AnsiballZ_container_config_data.py'
Jan 21 13:55:12 compute-0 sudo[153342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:12 compute-0 python3.9[153344]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 21 13:55:12 compute-0 sudo[153342]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:13 compute-0 sudo[153494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjdeaaepsnavltdifeqkaxfchcdoijdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003712.6914198-412-162076481112660/AnsiballZ_container_config_hash.py'
Jan 21 13:55:13 compute-0 sudo[153494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:13 compute-0 python3.9[153496]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 13:55:13 compute-0 sudo[153494]: pam_unix(sudo:session): session closed for user root
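[annotation] container_config_hash computes a digest per config volume under /var/lib/openstack; the result surfaces below as the EDPM_CONFIG_HASH environment variable in the podman create, visibly several sha256 digests joined with '-'. A sketch consistent with that observed format; exactly which files feed each digest is an assumption here:

    import hashlib, os

    def dir_digest(path):
        # sha256 over a config volume's files, walked in sorted order.
        # The precise inputs edpm-ansible hashes are assumed; the log only
        # shows that the result is sha256 digests joined with '-'.
        h = hashlib.sha256()
        for root, _, files in os.walk(path):
            for name in sorted(files):
                with open(os.path.join(root, name), "rb") as f:
                    h.update(f.read())
        return h.hexdigest()

    volumes = ["/var/lib/openstack/neutron-ovn-metadata-agent"]  # illustrative
    print("-".join(dir_digest(v) for v in volumes))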
Jan 21 13:55:13 compute-0 ceph-mon[75031]: pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:14 compute-0 sudo[153646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtkhtqjltwemcpophseonwszzhmwnkuj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769003713.6337657-422-147025661711565/AnsiballZ_edpm_container_manage.py'
Jan 21 13:55:14 compute-0 sudo[153646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:14 compute-0 python3[153648]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 13:55:14 compute-0 ceph-mon[75031]: pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:16 compute-0 ceph-mon[75031]: pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:19 compute-0 ceph-mon[75031]: pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:20 compute-0 sudo[153726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:55:20 compute-0 sudo[153726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:55:20 compute-0 sudo[153726]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:20 compute-0 sudo[153751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 21 13:55:20 compute-0 sudo[153751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:55:21 compute-0 ceph-mon[75031]: pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:24 compute-0 ceph-mon[75031]: pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:26 compute-0 ceph-mon[75031]: pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:26 compute-0 sudo[153751]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:55:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:55:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:55:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:55:26 compute-0 sudo[153859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:55:26 compute-0 sudo[153859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:55:26 compute-0 sudo[153859]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:26 compute-0 sudo[153884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:55:26 compute-0 sudo[153884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:55:26 compute-0 podman[153661]: 2026-01-21 13:55:26.376307945 +0000 UTC m=+11.879529899 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 13:55:26 compute-0 podman[153823]: 2026-01-21 13:55:26.393884119 +0000 UTC m=+3.088000685 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 13:55:26 compute-0 podman[153948]: 2026-01-21 13:55:26.568919628 +0000 UTC m=+0.076570211 container create 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 21 13:55:26 compute-0 podman[153948]: 2026-01-21 13:55:26.533874113 +0000 UTC m=+0.041524756 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 13:55:26 compute-0 python3[153648]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z 
                                           quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 21 13:55:26 compute-0 sudo[153646]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:26 compute-0 sudo[153884]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:55:26 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:55:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:55:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:55:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:55:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:55:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:55:26 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:55:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:55:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:55:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:55:26 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:55:27 compute-0 sudo[154114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:55:27 compute-0 sudo[154114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:55:27 compute-0 sudo[154114]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:27 compute-0 sudo[154165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:55:27 compute-0 sudo[154165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:55:27 compute-0 sudo[154217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iudnkcdumhuzflviabsjumrkkzrpoevr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003726.852659-430-266560242742669/AnsiballZ_stat.py'
Jan 21 13:55:27 compute-0 sudo[154217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:27 compute-0 ceph-mon[75031]: pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:55:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:55:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:55:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:55:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:55:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:55:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:55:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:55:27 compute-0 python3.9[154219]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:55:27 compute-0 sudo[154217]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:27 compute-0 podman[154234]: 2026-01-21 13:55:27.466390202 +0000 UTC m=+0.066633415 container create 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:55:27 compute-0 systemd[1]: Started libpod-conmon-2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e.scope.
Jan 21 13:55:27 compute-0 podman[154234]: 2026-01-21 13:55:27.42291216 +0000 UTC m=+0.023155393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:55:27 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:55:27 compute-0 podman[154234]: 2026-01-21 13:55:27.557938521 +0000 UTC m=+0.158181814 container init 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 13:55:27 compute-0 podman[154234]: 2026-01-21 13:55:27.570094821 +0000 UTC m=+0.170338044 container start 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 13:55:27 compute-0 podman[154234]: 2026-01-21 13:55:27.574545571 +0000 UTC m=+0.174788804 container attach 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 13:55:27 compute-0 keen_dubinsky[154274]: 167 167
Jan 21 13:55:27 compute-0 systemd[1]: libpod-2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e.scope: Deactivated successfully.
Jan 21 13:55:27 compute-0 podman[154234]: 2026-01-21 13:55:27.582037075 +0000 UTC m=+0.182280318 container died 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a38e22bd6a4dd9a5250cad341bb180f7b30e406ea6f9264211e5af7d5e6c622b-merged.mount: Deactivated successfully.
Jan 21 13:55:27 compute-0 podman[154234]: 2026-01-21 13:55:27.635198137 +0000 UTC m=+0.235441340 container remove 2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:55:27 compute-0 systemd[1]: libpod-conmon-2be8608649789b27dabd91c21d446aeb63feb90a96c433f6a3ac44c887b2f80e.scope: Deactivated successfully.
Jan 21 13:55:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 13:55:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5620 writes, 24K keys, 5620 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5620 writes, 886 syncs, 6.34 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5620 writes, 24K keys, 5620 commit groups, 1.0 writes per commit group, ingest: 18.77 MB, 0.03 MB/s
                                           Interval WAL: 5620 writes, 886 syncs, 6.34 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 13:55:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:27 compute-0 podman[154371]: 2026-01-21 13:55:27.825992785 +0000 UTC m=+0.063361075 container create 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 13:55:27 compute-0 systemd[1]: Started libpod-conmon-25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969.scope.
Jan 21 13:55:27 compute-0 podman[154371]: 2026-01-21 13:55:27.800772552 +0000 UTC m=+0.038140862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:55:27 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:27 compute-0 sudo[154441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieqqssewhhzlggifrvjqttnyfoihwpro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003727.6169846-439-256799173693829/AnsiballZ_file.py'
Jan 21 13:55:27 compute-0 podman[154371]: 2026-01-21 13:55:27.91738983 +0000 UTC m=+0.154758140 container init 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:55:27 compute-0 sudo[154441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:27 compute-0 podman[154371]: 2026-01-21 13:55:27.928967146 +0000 UTC m=+0.166335426 container start 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 13:55:27 compute-0 podman[154371]: 2026-01-21 13:55:27.932624246 +0000 UTC m=+0.169992526 container attach 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 13:55:28 compute-0 python3.9[154444]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:28 compute-0 sudo[154441]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:28 compute-0 sudo[154531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebcvcpcxwyddbekcbjzhthmskrfhbiqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003727.6169846-439-256799173693829/AnsiballZ_stat.py'
Jan 21 13:55:28 compute-0 sudo[154531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:28 compute-0 xenodochial_babbage[154429]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:55:28 compute-0 xenodochial_babbage[154429]: --> All data devices are unavailable
Jan 21 13:55:28 compute-0 systemd[1]: libpod-25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969.scope: Deactivated successfully.
Jan 21 13:55:28 compute-0 podman[154371]: 2026-01-21 13:55:28.435043862 +0000 UTC m=+0.672412162 container died 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 13:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-10b158b96a673d94c0295e180e48f351e92aefa073b7af779c0f25aa07caa39f-merged.mount: Deactivated successfully.
Jan 21 13:55:28 compute-0 podman[154371]: 2026-01-21 13:55:28.485348074 +0000 UTC m=+0.722716354 container remove 25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:55:28 compute-0 systemd[1]: libpod-conmon-25070d0751416ac8e218aaa6c979f83182255cfea570ed60b36bbbff237e1969.scope: Deactivated successfully.
Jan 21 13:55:28 compute-0 sudo[154165]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:28 compute-0 python3.9[154533]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 13:55:28 compute-0 sudo[154531]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:28 compute-0 sudo[154551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:55:28 compute-0 sudo[154551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:55:28 compute-0 sudo[154551]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:28 compute-0 sudo[154584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:55:28 compute-0 sudo[154584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:55:28 compute-0 podman[154688]: 2026-01-21 13:55:28.926697834 +0000 UTC m=+0.048342104 container create 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:55:28 compute-0 systemd[1]: Started libpod-conmon-5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb.scope.
Jan 21 13:55:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:55:28 compute-0 podman[154688]: 2026-01-21 13:55:28.902911047 +0000 UTC m=+0.024555317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:55:29 compute-0 podman[154688]: 2026-01-21 13:55:29.011465205 +0000 UTC m=+0.133109475 container init 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 13:55:29 compute-0 podman[154688]: 2026-01-21 13:55:29.020198291 +0000 UTC m=+0.141842531 container start 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 13:55:29 compute-0 podman[154688]: 2026-01-21 13:55:29.025873051 +0000 UTC m=+0.147517301 container attach 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 13:55:29 compute-0 busy_shaw[154740]: 167 167
Jan 21 13:55:29 compute-0 systemd[1]: libpod-5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb.scope: Deactivated successfully.
Jan 21 13:55:29 compute-0 podman[154688]: 2026-01-21 13:55:29.028964598 +0000 UTC m=+0.150608838 container died 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-168920f88ee9e3d4028a0b400d40ca882f98b8e8491e232038df3460b3f4e05c-merged.mount: Deactivated successfully.
Jan 21 13:55:29 compute-0 podman[154688]: 2026-01-21 13:55:29.068710938 +0000 UTC m=+0.190355168 container remove 5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shaw, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:55:29 compute-0 sudo[154790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbmngryjrvpkmnkpgpmbxzawhfkjxhly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003728.640321-439-5698257486068/AnsiballZ_copy.py'
Jan 21 13:55:29 compute-0 sudo[154790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:29 compute-0 systemd[1]: libpod-conmon-5b8002017fa90d7319768d908ca835e514f1e169d399d953f16081f7f51440eb.scope: Deactivated successfully.
Jan 21 13:55:29 compute-0 ceph-mon[75031]: pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:29 compute-0 podman[154805]: 2026-01-21 13:55:29.258623234 +0000 UTC m=+0.064187355 container create 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 13:55:29 compute-0 python3.9[154797]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769003728.640321-439-5698257486068/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:29 compute-0 sudo[154790]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:29 compute-0 systemd[1]: Started libpod-conmon-4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1.scope.
Jan 21 13:55:29 compute-0 podman[154805]: 2026-01-21 13:55:29.223410296 +0000 UTC m=+0.028974467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:55:29 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c986bcc9a5ec58da271cc75cd4d50180a52b4352537afcee4c7eb63f5287e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c986bcc9a5ec58da271cc75cd4d50180a52b4352537afcee4c7eb63f5287e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c986bcc9a5ec58da271cc75cd4d50180a52b4352537afcee4c7eb63f5287e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c986bcc9a5ec58da271cc75cd4d50180a52b4352537afcee4c7eb63f5287e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:29 compute-0 podman[154805]: 2026-01-21 13:55:29.348802579 +0000 UTC m=+0.154366710 container init 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:55:29 compute-0 podman[154805]: 2026-01-21 13:55:29.35979572 +0000 UTC m=+0.165359831 container start 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:55:29 compute-0 podman[154805]: 2026-01-21 13:55:29.364364203 +0000 UTC m=+0.169928334 container attach 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:55:29 compute-0 sudo[154900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-camaqlhqswqatuaaylpvaqziwnwvkkqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003728.640321-439-5698257486068/AnsiballZ_systemd.py'
Jan 21 13:55:29 compute-0 sudo[154900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:29 compute-0 epic_lumiere[154822]: {
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:     "0": [
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:         {
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "devices": [
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "/dev/loop3"
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             ],
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_name": "ceph_lv0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_size": "21470642176",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "name": "ceph_lv0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "tags": {
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.cluster_name": "ceph",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.crush_device_class": "",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.encrypted": "0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.objectstore": "bluestore",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.osd_id": "0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.type": "block",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.vdo": "0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.with_tpm": "0"
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             },
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "type": "block",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "vg_name": "ceph_vg0"
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:         }
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:     ],
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:     "1": [
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:         {
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "devices": [
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "/dev/loop4"
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             ],
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_name": "ceph_lv1",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_size": "21470642176",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "name": "ceph_lv1",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "tags": {
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.cluster_name": "ceph",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.crush_device_class": "",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.encrypted": "0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.objectstore": "bluestore",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.osd_id": "1",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.type": "block",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.vdo": "0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.with_tpm": "0"
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             },
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "type": "block",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "vg_name": "ceph_vg1"
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:         }
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:     ],
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:     "2": [
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:         {
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "devices": [
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "/dev/loop5"
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             ],
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_name": "ceph_lv2",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_size": "21470642176",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "name": "ceph_lv2",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "tags": {
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.cluster_name": "ceph",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.crush_device_class": "",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.encrypted": "0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.objectstore": "bluestore",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.osd_id": "2",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.type": "block",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.vdo": "0",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:                 "ceph.with_tpm": "0"
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             },
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "type": "block",
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:             "vg_name": "ceph_vg2"
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:         }
Jan 21 13:55:29 compute-0 epic_lumiere[154822]:     ]
Jan 21 13:55:29 compute-0 epic_lumiere[154822]: }
Jan 21 13:55:29 compute-0 systemd[1]: libpod-4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1.scope: Deactivated successfully.
Jan 21 13:55:29 compute-0 podman[154805]: 2026-01-21 13:55:29.673170103 +0000 UTC m=+0.478734214 container died 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 13:55:29 compute-0 python3.9[154902]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 13:55:29 compute-0 systemd[1]: Reloading.
Jan 21 13:55:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:29 compute-0 systemd-sysv-generator[154949]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:55:29 compute-0 systemd-rc-local-generator[154946]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:55:30 compute-0 sudo[154900]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:30 compute-0 sudo[155031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcggjurnyfbznawmvhipwptkkupaigzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003728.640321-439-5698257486068/AnsiballZ_systemd.py'
Jan 21 13:55:30 compute-0 sudo[155031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-45c986bcc9a5ec58da271cc75cd4d50180a52b4352537afcee4c7eb63f5287e9-merged.mount: Deactivated successfully.
Jan 21 13:55:30 compute-0 podman[154805]: 2026-01-21 13:55:30.521760831 +0000 UTC m=+1.327324942 container remove 4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 13:55:30 compute-0 systemd[1]: libpod-conmon-4f67f2d91f744f31a46fe9df4adc30f204f82e868feb36f7388f404103d997c1.scope: Deactivated successfully.
Jan 21 13:55:30 compute-0 sudo[154584]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:30 compute-0 sudo[155034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:55:30 compute-0 sudo[155034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:55:30 compute-0 sudo[155034]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:30 compute-0 sudo[155059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:55:30 compute-0 sudo[155059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:55:30 compute-0 python3.9[155033]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:55:30 compute-0 systemd[1]: Reloading.
Jan 21 13:55:30 compute-0 systemd-rc-local-generator[155108]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:55:30 compute-0 systemd-sysv-generator[155113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:55:31 compute-0 podman[155134]: 2026-01-21 13:55:30.991989514 +0000 UTC m=+0.024359593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:55:31 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 21 13:55:31 compute-0 podman[155134]: 2026-01-21 13:55:31.247305863 +0000 UTC m=+0.279675892 container create 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 13:55:31 compute-0 systemd[1]: Started libpod-conmon-89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066.scope.
Jan 21 13:55:31 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:55:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35671b1c7d92c7660872932435139198769fd63a07748331c62a0e8a0175a667/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35671b1c7d92c7660872932435139198769fd63a07748331c62a0e8a0175a667/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:31 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:55:31 compute-0 ceph-mon[75031]: pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:31 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2.
Jan 21 13:55:31 compute-0 podman[155134]: 2026-01-21 13:55:31.642830363 +0000 UTC m=+0.675200482 container init 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 21 13:55:31 compute-0 podman[155134]: 2026-01-21 13:55:31.655373351 +0000 UTC m=+0.687743420 container start 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 13:55:31 compute-0 podman[155152]: 2026-01-21 13:55:31.655863904 +0000 UTC m=+0.516011044 container init 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 21 13:55:31 compute-0 podman[155134]: 2026-01-21 13:55:31.661230427 +0000 UTC m=+0.693600456 container attach 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:55:31 compute-0 youthful_shtern[155172]: 167 167
Jan 21 13:55:31 compute-0 podman[155134]: 2026-01-21 13:55:31.664758193 +0000 UTC m=+0.697128212 container died 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 13:55:31 compute-0 systemd[1]: libpod-89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066.scope: Deactivated successfully.
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: + sudo -E kolla_set_configs
Jan 21 13:55:31 compute-0 podman[155152]: 2026-01-21 13:55:31.688674753 +0000 UTC m=+0.548821873 container start 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 21 13:55:31 compute-0 edpm-start-podman-container[155152]: ovn_metadata_agent
Jan 21 13:55:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-81e520e0451c41c1a198e3c64edff492c2ae559f679686d54018d14d40a92af7-merged.mount: Deactivated successfully.
Jan 21 13:55:31 compute-0 podman[155134]: 2026-01-21 13:55:31.85226334 +0000 UTC m=+0.884633359 container remove 89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_shtern, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 13:55:31 compute-0 systemd[1]: libpod-conmon-89c4b2b28e1fc74b67263493cf871d5ca2b5b0c16f79f71ed92a2ab7eb532066.scope: Deactivated successfully.
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Validating config file
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Copying service configuration files
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Writing out command to execute
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: ++ cat /run_command
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: + CMD=neutron-ovn-metadata-agent
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: + ARGS=
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: + sudo kolla_copy_cacerts
Jan 21 13:55:31 compute-0 edpm-start-podman-container[155151]: Creating additional drop-in dependency for "ovn_metadata_agent" (9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2)
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: + [[ ! -n '' ]]
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: + . kolla_extend_start
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: + umask 0022
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: + exec neutron-ovn-metadata-agent
Jan 21 13:55:31 compute-0 ovn_metadata_agent[155169]: Running command: 'neutron-ovn-metadata-agent'
Jan 21 13:55:31 compute-0 systemd[1]: Reloading.
Jan 21 13:55:31 compute-0 podman[155182]: 2026-01-21 13:55:31.92927506 +0000 UTC m=+0.225023214 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 13:55:32 compute-0 systemd-rc-local-generator[155267]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:55:32 compute-0 systemd-sysv-generator[155273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:55:32 compute-0 podman[155262]: 2026-01-21 13:55:32.040138405 +0000 UTC m=+0.025984301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:55:32 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 21 13:55:32 compute-0 sudo[155031]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 13:55:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6984 writes, 28K keys, 6984 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6984 writes, 1319 syncs, 5.29 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6984 writes, 28K keys, 6984 commit groups, 1.0 writes per commit group, ingest: 19.80 MB, 0.03 MB/s
                                           Interval WAL: 6984 writes, 1319 syncs, 5.29 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 13:55:32 compute-0 podman[155262]: 2026-01-21 13:55:32.888194091 +0000 UTC m=+0.874039997 container create dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:55:33 compute-0 python3.9[155443]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 21 13:55:33 compute-0 ceph-mon[75031]: pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:33 compute-0 systemd[1]: Started libpod-conmon-dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af.scope.
Jan 21 13:55:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee878c2796fad942ae8b6187552fc5c634b95cfd35aca0aae20f0a58b4cd5ce6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee878c2796fad942ae8b6187552fc5c634b95cfd35aca0aae20f0a58b4cd5ce6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee878c2796fad942ae8b6187552fc5c634b95cfd35aca0aae20f0a58b4cd5ce6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee878c2796fad942ae8b6187552fc5c634b95cfd35aca0aae20f0a58b4cd5ce6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:55:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:33 compute-0 podman[155262]: 2026-01-21 13:55:33.707400814 +0000 UTC m=+1.693246700 container init dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 21 13:55:33 compute-0 podman[155262]: 2026-01-21 13:55:33.715631117 +0000 UTC m=+1.701476993 container start dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 13:55:33 compute-0 podman[155262]: 2026-01-21 13:55:33.794359729 +0000 UTC m=+1.780205615 container attach dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 13:55:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.835 155179 INFO neutron.common.config [-] Logging enabled!
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.835 155179 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.836 155179 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.836 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.836 155179 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.836 155179 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.836 155179 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.837 155179 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.838 155179 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.839 155179 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.840 155179 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.841 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.842 155179 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.843 155179 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.844 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.845 155179 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.846 155179 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.847 155179 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.848 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 sudo[155600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzazxdnqusqloucdbctfvhhjrxvdsvtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003733.5649986-484-272982526493205/AnsiballZ_stat.py'
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.849 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.850 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.851 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 sudo[155600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.852 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.853 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.854 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.855 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.856 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.857 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.858 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.859 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.860 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.861 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.862 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.863 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.864 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.865 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.866 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.867 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.868 155179 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.869 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.870 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.871 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.872 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.873 155179 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.883 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.884 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.884 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.884 155179 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.885 155179 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.899 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 3ade990a-d6f9-4724-a58c-009e4fc34364 (UUID: 3ade990a-d6f9-4724-a58c-009e4fc34364) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.917 155179 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.918 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.918 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.918 155179 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.921 155179 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.927 155179 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.933 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '3ade990a-d6f9-4724-a58c-009e4fc34364'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd43120a7c0>], external_ids={}, name=3ade990a-d6f9-4724-a58c-009e4fc34364, nb_cfg_timestamp=1769003670447, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.934 155179 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd43120ae50>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.935 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.935 155179 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.935 155179 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.935 155179 INFO oslo_service.service [-] Starting 1 workers
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.941 155179 DEBUG oslo_service.service [-] Started child 155613 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.945 155179 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpjyd3laql/privsep.sock']
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.945 155613 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-428749'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.966 155613 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.966 155613 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.967 155613 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.970 155613 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.976 155613 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 21 13:55:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:33.982 155613 INFO eventlet.wsgi.server [-] (155613) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 21 13:55:34 compute-0 python3.9[155602]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:55:34 compute-0 sudo[155600]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:34 compute-0 sudo[155799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozndjcizftvgtfbodybljiuowcbvmdkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003733.5649986-484-272982526493205/AnsiballZ_copy.py'
Jan 21 13:55:34 compute-0 sudo[155799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:34 compute-0 lvm[155807]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:55:34 compute-0 lvm[155807]: VG ceph_vg1 finished
Jan 21 13:55:34 compute-0 lvm[155806]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:55:34 compute-0 lvm[155806]: VG ceph_vg0 finished
Jan 21 13:55:34 compute-0 lvm[155809]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:55:34 compute-0 lvm[155809]: VG ceph_vg2 finished
Jan 21 13:55:34 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 21 13:55:34 compute-0 suspicious_ganguly[155493]: {}
Jan 21 13:55:34 compute-0 systemd[1]: libpod-dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af.scope: Deactivated successfully.
Jan 21 13:55:34 compute-0 systemd[1]: libpod-dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af.scope: Consumed 1.420s CPU time.
Jan 21 13:55:34 compute-0 podman[155262]: 2026-01-21 13:55:34.593824416 +0000 UTC m=+2.579670322 container died dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:55:34 compute-0 python3.9[155802]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003733.5649986-484-272982526493205/.source.yaml _original_basename=.5r64btgd follow=False checksum=10f7f895d301938e0fadf18a8ee2b485f6809c3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:34 compute-0 sudo[155799]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:34 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.676 155179 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 21 13:55:34 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.678 155179 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpjyd3laql/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 21 13:55:34 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.518 155811 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 21 13:55:34 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.522 155811 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 21 13:55:34 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.524 155811 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 21 13:55:34 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.524 155811 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155811
Jan 21 13:55:34 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:34.682 155811 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac9e61f-c4a9-4cfa-904e-fa55e1aa94a3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 21 13:55:35 compute-0 sshd-session[146285]: Connection closed by 192.168.122.30 port 60276
Jan 21 13:55:35 compute-0 sshd-session[146282]: pam_unix(sshd:session): session closed for user zuul
Jan 21 13:55:35 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 21 13:55:35 compute-0 systemd[1]: session-48.scope: Consumed 57.852s CPU time.
Jan 21 13:55:35 compute-0 systemd-logind[780]: Session 48 logged out. Waiting for processes to exit.
Jan 21 13:55:35 compute-0 systemd-logind[780]: Removed session 48.
Jan 21 13:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee878c2796fad942ae8b6187552fc5c634b95cfd35aca0aae20f0a58b4cd5ce6-merged.mount: Deactivated successfully.
Jan 21 13:55:35 compute-0 podman[155262]: 2026-01-21 13:55:35.104300962 +0000 UTC m=+3.090146868 container remove dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:55:35 compute-0 sudo[155059]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:55:35 compute-0 systemd[1]: libpod-conmon-dcd1371b68458db651eafd867146dab30b32358776208ca9376f803641f940af.scope: Deactivated successfully.
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.216 155811 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.216 155811 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.216 155811 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 13:55:35 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:55:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:55:35 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:55:35 compute-0 sudo[155853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:55:35 compute-0 sudo[155853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:55:35 compute-0 sudo[155853]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:35 compute-0 ceph-mon[75031]: pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:55:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.804 155811 DEBUG oslo.privsep.daemon [-] privsep: reply[225b7260-b053-4e30-8fe3-3d4c63bb79df]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.807 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, column=external_ids, values=({'neutron:ovn-metadata-id': '817f77ed-8014-5ed7-bdf9-4f7a33d6b36b'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 13:55:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.868 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.881 155179 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.881 155179 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.881 155179 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.882 155179 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.882 155179 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.882 155179 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.882 155179 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.882 155179 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.883 155179 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.884 155179 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.885 155179 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.885 155179 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.885 155179 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.885 155179 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.885 155179 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.886 155179 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.886 155179 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.886 155179 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.886 155179 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.887 155179 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.887 155179 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.887 155179 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.887 155179 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.888 155179 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.888 155179 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.888 155179 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.888 155179 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.888 155179 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.889 155179 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.890 155179 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.891 155179 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.892 155179 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.893 155179 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.895 155179 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.895 155179 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.895 155179 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.895 155179 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.896 155179 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.896 155179 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.896 155179 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.896 155179 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.897 155179 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.898 155179 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.899 155179 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.900 155179 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.901 155179 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.902 155179 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.902 155179 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.902 155179 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.902 155179 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.903 155179 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.904 155179 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.904 155179 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.904 155179 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.904 155179 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.904 155179 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.905 155179 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.906 155179 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.906 155179 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.906 155179 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.906 155179 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.906 155179 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.907 155179 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.907 155179 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.907 155179 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.907 155179 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.907 155179 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.908 155179 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.908 155179 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.908 155179 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.908 155179 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.908 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.909 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.909 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.909 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.909 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.909 155179 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.910 155179 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.911 155179 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.912 155179 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.912 155179 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.912 155179 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.912 155179 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.912 155179 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.913 155179 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.914 155179 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.914 155179 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.914 155179 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.914 155179 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.914 155179 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.915 155179 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.915 155179 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.915 155179 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.915 155179 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.915 155179 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.916 155179 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.916 155179 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.916 155179 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.916 155179 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.916 155179 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.917 155179 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.917 155179 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.917 155179 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.917 155179 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.917 155179 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.918 155179 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.919 155179 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.919 155179 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.919 155179 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.919 155179 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.919 155179 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.920 155179 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.921 155179 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.922 155179 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.923 155179 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.923 155179 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.923 155179 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.923 155179 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.923 155179 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.924 155179 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.925 155179 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.926 155179 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.927 155179 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.928 155179 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.929 155179 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.930 155179 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.930 155179 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.930 155179 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.930 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.930 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.931 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.932 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.933 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.934 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.935 155179 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.936 155179 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.936 155179 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.936 155179 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 13:55:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:55:35.936 155179 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 21 13:55:37 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 13:55:37 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5492 writes, 23K keys, 5492 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5492 writes, 812 syncs, 6.76 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5492 writes, 23K keys, 5492 commit groups, 1.0 writes per commit group, ingest: 18.42 MB, 0.03 MB/s
                                           Interval WAL: 5492 writes, 812 syncs, 6.76 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 13:55:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:38 compute-0 ceph-mon[75031]: pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:55:39
Jan 21 13:55:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:55:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:55:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'vms', '.rgw.root', '.mgr', 'default.rgw.meta', 'images', 'default.rgw.control', 'backups']
Jan 21 13:55:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:55:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:39 compute-0 ceph-mon[75031]: pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:40 compute-0 ceph-mgr[75322]: [devicehealth INFO root] Check health
Jan 21 13:55:40 compute-0 sshd-session[155878]: Accepted publickey for zuul from 192.168.122.30 port 53628 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 13:55:40 compute-0 systemd-logind[780]: New session 49 of user zuul.
Jan 21 13:55:40 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 21 13:55:40 compute-0 sshd-session[155878]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 13:55:40 compute-0 ceph-mon[75031]: pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:55:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:55:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:55:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:55:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:55:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:55:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:55:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:55:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:55:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:55:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:55:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:55:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:55:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:55:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:55:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:55:41 compute-0 python3.9[156031]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:55:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:42 compute-0 sudo[156185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfksmfzkjupylngawsrahwkkiwseuhoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003742.0701885-29-73645146162082/AnsiballZ_command.py'
Jan 21 13:55:42 compute-0 sudo[156185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:42 compute-0 python3.9[156187]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:55:42 compute-0 sudo[156185]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:42 compute-0 ceph-mon[75031]: pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:43 compute-0 sudo[156351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvxdkwcdsgpqtwhwjxlbhgypapwgphwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003743.3425698-40-163037912388376/AnsiballZ_systemd_service.py'
Jan 21 13:55:43 compute-0 sudo[156351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:44 compute-0 python3.9[156353]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 13:55:44 compute-0 systemd[1]: Reloading.
Jan 21 13:55:44 compute-0 systemd-rc-local-generator[156382]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:55:44 compute-0 systemd-sysv-generator[156386]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:55:44 compute-0 sudo[156351]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:44 compute-0 ceph-mon[75031]: pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:45 compute-0 python3.9[156539]: ansible-ansible.builtin.service_facts Invoked
Jan 21 13:55:45 compute-0 network[156556]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 13:55:45 compute-0 network[156557]: 'network-scripts' will be removed from distribution in near future.
Jan 21 13:55:45 compute-0 network[156558]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 13:55:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:47 compute-0 ceph-mon[75031]: pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:49 compute-0 ceph-mon[75031]: pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:50 compute-0 sudo[156818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhhdewpsutdwmcevalwbocczvkyoiyuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003749.8207178-59-229549165361395/AnsiballZ_systemd_service.py'
Jan 21 13:55:50 compute-0 sudo[156818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:55:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 13:55:50 compute-0 python3.9[156820]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:55:50 compute-0 sudo[156818]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:50 compute-0 ceph-mon[75031]: pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:51 compute-0 sudo[156971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbihysuhkaxfobvuyfdsciavehyfoyov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003750.7199028-59-45896211603700/AnsiballZ_systemd_service.py'
Jan 21 13:55:51 compute-0 sudo[156971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:51 compute-0 python3.9[156973]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:55:51 compute-0 sudo[156971]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:51 compute-0 sudo[157124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rndnrldypnisocttgcddiymyqjcokstp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003751.4960155-59-269959725120881/AnsiballZ_systemd_service.py'
Jan 21 13:55:51 compute-0 sudo[157124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:53 compute-0 python3.9[157126]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:55:53 compute-0 sudo[157124]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:53 compute-0 sudo[157277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsvluxgscpiagvynoywxsaytsskrwfox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003753.5582259-59-807460716803/AnsiballZ_systemd_service.py'
Jan 21 13:55:53 compute-0 sudo[157277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:54 compute-0 python3.9[157279]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:55:54 compute-0 sudo[157277]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:54 compute-0 sudo[157430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsevkmhsledcxzbpcncsmdiikuokgwiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003754.4372141-59-74125794790269/AnsiballZ_systemd_service.py'
Jan 21 13:55:54 compute-0 sudo[157430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:54 compute-0 ceph-mon[75031]: pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:55 compute-0 python3.9[157432]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:55:55 compute-0 sudo[157430]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:55 compute-0 sudo[157583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuedptfrjazvkconujqvemylmzjfwyzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003755.2656624-59-155735750317922/AnsiballZ_systemd_service.py'
Jan 21 13:55:55 compute-0 sudo[157583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:55 compute-0 python3.9[157585]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:55:55 compute-0 sudo[157583]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:56 compute-0 ceph-mon[75031]: pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:56 compute-0 sudo[157736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nursmnjmsfukggcckanbotidnajwrlcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003756.1230347-59-190017413230397/AnsiballZ_systemd_service.py'
Jan 21 13:55:56 compute-0 sudo[157736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:56 compute-0 podman[157738]: 2026-01-21 13:55:56.584967681 +0000 UTC m=+0.109044282 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:55:56 compute-0 python3.9[157739]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 13:55:56 compute-0 sudo[157736]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:57 compute-0 ceph-mon[75031]: pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:57 compute-0 sudo[157915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idgpretungnxrejaqbrezuffhnkqhcep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003757.1117222-111-89123957756112/AnsiballZ_file.py'
Jan 21 13:55:57 compute-0 sudo[157915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:57 compute-0 python3.9[157917]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:57 compute-0 sudo[157915]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:58 compute-0 sudo[158067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpocutcmexuhyaarqnzgtknzfyuzvygn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003757.885752-111-9136446228104/AnsiballZ_file.py'
Jan 21 13:55:58 compute-0 sudo[158067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:58 compute-0 python3.9[158069]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:58 compute-0 sudo[158067]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:58 compute-0 sudo[158219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmtnvwodbheyctzzmbonpwuozmsogiwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003758.5584137-111-41333970380990/AnsiballZ_file.py'
Jan 21 13:55:58 compute-0 sudo[158219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:59 compute-0 python3.9[158221]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:59 compute-0 sudo[158219]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:59 compute-0 ceph-mon[75031]: pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:55:59 compute-0 sudo[158371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arryjswegnowchrmknxfxaquxxkscfot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003759.1896117-111-44473940716708/AnsiballZ_file.py'
Jan 21 13:55:59 compute-0 sudo[158371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:55:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:55:59 compute-0 python3.9[158373]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:55:59 compute-0 sudo[158371]: pam_unix(sudo:session): session closed for user root
Jan 21 13:55:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:00 compute-0 sudo[158523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbwasjwdodwodthgouqjuhmhhahxtbca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003759.939507-111-212363468679160/AnsiballZ_file.py'
Jan 21 13:56:00 compute-0 sudo[158523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:00 compute-0 python3.9[158525]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:56:00 compute-0 sudo[158523]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:00 compute-0 sudo[158675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qstlqdhfjtgolkhycazqdixdzypiodfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003760.6312628-111-202252306685720/AnsiballZ_file.py'
Jan 21 13:56:00 compute-0 sudo[158675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:01 compute-0 python3.9[158677]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:56:01 compute-0 ceph-mon[75031]: pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:01 compute-0 sudo[158675]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:01 compute-0 sudo[158827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtwuithqmjobxaapntlchhycknphloio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003761.330492-111-14888406928917/AnsiballZ_file.py'
Jan 21 13:56:01 compute-0 sudo[158827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:01 compute-0 python3.9[158829]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:56:01 compute-0 sudo[158827]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:02 compute-0 podman[158929]: 2026-01-21 13:56:02.367375386 +0000 UTC m=+0.092663757 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 13:56:02 compute-0 sudo[158998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjmlokdexueblpnizaunyuipkqvmhnvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003762.0946255-161-93683928832220/AnsiballZ_file.py'
Jan 21 13:56:02 compute-0 sudo[158998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:02 compute-0 python3.9[159000]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:56:02 compute-0 sudo[158998]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:03 compute-0 sudo[159150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqwmiddkjlcomdahdtmvxrgqvsspyyme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003762.742848-161-113325229986288/AnsiballZ_file.py'
Jan 21 13:56:03 compute-0 sudo[159150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:03 compute-0 ceph-mon[75031]: pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:03 compute-0 python3.9[159152]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:56:03 compute-0 sudo[159150]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:03 compute-0 sudo[159302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvclhzrxdsrfnqikdwrbquoresmzvquv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003763.3968515-161-80278708033573/AnsiballZ_file.py'
Jan 21 13:56:03 compute-0 sudo[159302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:03 compute-0 python3.9[159304]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:56:03 compute-0 sudo[159302]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:04 compute-0 sudo[159454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkjwnubmzpafeltaoewxuoydnczorsgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003764.0658624-161-245761074774684/AnsiballZ_file.py'
Jan 21 13:56:04 compute-0 sudo[159454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:04 compute-0 python3.9[159456]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:56:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:56:04 compute-0 sudo[159454]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:05 compute-0 sudo[159606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtqunejcetkoeskvhsfhqmrwymjembgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003764.8124723-161-198100180034203/AnsiballZ_file.py'
Jan 21 13:56:05 compute-0 sudo[159606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:05 compute-0 ceph-mon[75031]: pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:05 compute-0 python3.9[159608]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:56:05 compute-0 sudo[159606]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:05 compute-0 sudo[159758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emfcqqeoqduclhfbfoodjcfibwtavomk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003765.4538238-161-205780064118869/AnsiballZ_file.py'
Jan 21 13:56:05 compute-0 sudo[159758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:05 compute-0 python3.9[159760]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:56:05 compute-0 sudo[159758]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:06 compute-0 sudo[159910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urxzisfjmjlkucvhmucolkhxnoydnakn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003766.0542953-161-133108860954414/AnsiballZ_file.py'
Jan 21 13:56:06 compute-0 sudo[159910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:06 compute-0 python3.9[159912]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:56:06 compute-0 sudo[159910]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:07 compute-0 sudo[160062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tagiiitmtioaccucifegzgpzeakxxhvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003766.795971-212-26758769126950/AnsiballZ_command.py'
Jan 21 13:56:07 compute-0 sudo[160062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:07 compute-0 ceph-mon[75031]: pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:07 compute-0 python3.9[160064]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:56:07 compute-0 sudo[160062]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:08 compute-0 python3.9[160216]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 13:56:08 compute-0 sudo[160366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayitpxbhnfpcqupumrdbigxxbjkzqhkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003768.5192125-230-125174757374441/AnsiballZ_systemd_service.py'
Jan 21 13:56:08 compute-0 sudo[160366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:09 compute-0 python3.9[160368]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 13:56:09 compute-0 systemd[1]: Reloading.
Jan 21 13:56:09 compute-0 systemd-sysv-generator[160399]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:56:09 compute-0 systemd-rc-local-generator[160396]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:56:09 compute-0 ceph-mon[75031]: pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:09 compute-0 sudo[160366]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:56:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:09 compute-0 sudo[160553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhbbwbcxwsvbugvfjijmnqaooiolfhhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003769.6674724-238-137054936749962/AnsiballZ_command.py'
Jan 21 13:56:10 compute-0 sudo[160553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:10 compute-0 python3.9[160555]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:56:10 compute-0 sudo[160553]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:10 compute-0 sudo[160706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvceadvtmrxgjimduvtaboijeexldgfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003770.3879087-238-99226500319429/AnsiballZ_command.py'
Jan 21 13:56:10 compute-0 sudo[160706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:10 compute-0 python3.9[160708]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:56:10 compute-0 sudo[160706]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:56:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:56:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:56:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:56:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:56:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:56:11 compute-0 sudo[160859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exwawayvfngyraljiipfnjckmwnvxhcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003771.035439-238-142252481112531/AnsiballZ_command.py'
Jan 21 13:56:11 compute-0 sudo[160859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:11 compute-0 python3.9[160861]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:56:11 compute-0 sudo[160859]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:12 compute-0 ceph-mon[75031]: pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:12 compute-0 sudo[161012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-belpdvwicwwutqpjvlbgbrwqhvayvcix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003772.0950086-238-180987519324025/AnsiballZ_command.py'
Jan 21 13:56:12 compute-0 sudo[161012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:12 compute-0 python3.9[161014]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:56:12 compute-0 sudo[161012]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:13 compute-0 sudo[161165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpnyvynvzkdccoaqxnbkpomnptyggvga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003772.8060596-238-245544142767147/AnsiballZ_command.py'
Jan 21 13:56:13 compute-0 sudo[161165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:13 compute-0 python3.9[161167]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:56:13 compute-0 ceph-mon[75031]: pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:13 compute-0 sudo[161165]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:13 compute-0 sudo[161318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gafwgwcyqqadtgffkazvcnlkaldtbevx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003773.4302003-238-157390434353273/AnsiballZ_command.py'
Jan 21 13:56:13 compute-0 sudo[161318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:13 compute-0 python3.9[161320]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:56:13 compute-0 sudo[161318]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:14 compute-0 sudo[161471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygozmefjsfmwhaammdzsubijinexvxsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003774.1443415-238-138290332641352/AnsiballZ_command.py'
Jan 21 13:56:14 compute-0 sudo[161471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:56:14 compute-0 python3.9[161473]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:56:14 compute-0 sudo[161471]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:15 compute-0 ceph-mon[75031]: pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:15 compute-0 sudo[161624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppostdmynorauexhquzffukxviizctfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003775.0726492-292-125449308777989/AnsiballZ_getent.py'
Jan 21 13:56:15 compute-0 sudo[161624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:15 compute-0 python3.9[161626]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 21 13:56:15 compute-0 sudo[161624]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:16 compute-0 sudo[161777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdjvarszjrjrrbzkgczzuvgwvylbrzig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003775.963063-300-200535893857455/AnsiballZ_group.py'
Jan 21 13:56:16 compute-0 sudo[161777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:16 compute-0 python3.9[161779]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 13:56:16 compute-0 groupadd[161780]: group added to /etc/group: name=libvirt, GID=42473
Jan 21 13:56:16 compute-0 groupadd[161780]: group added to /etc/gshadow: name=libvirt
Jan 21 13:56:16 compute-0 groupadd[161780]: new group: name=libvirt, GID=42473
Jan 21 13:56:16 compute-0 sudo[161777]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:17 compute-0 ceph-mon[75031]: pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:17 compute-0 sudo[161935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlqyxuzlvgubxyymfsqvsvfvvruwcamh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003776.8861566-308-240283699472205/AnsiballZ_user.py'
Jan 21 13:56:17 compute-0 sudo[161935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:17 compute-0 python3.9[161937]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 13:56:17 compute-0 useradd[161939]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 21 13:56:17 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 13:56:17 compute-0 sudo[161935]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:18 compute-0 sudo[162096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krknztvcvsgmbhvundqvdgmzkxssyktf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003778.0916805-319-123449900694630/AnsiballZ_setup.py'
Jan 21 13:56:18 compute-0 sudo[162096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:18 compute-0 python3.9[162098]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 13:56:18 compute-0 sudo[162096]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:19 compute-0 ceph-mon[75031]: pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:19 compute-0 sudo[162180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylwopkpcvmytojqrtuvuuynhvobnxwlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003778.0916805-319-123449900694630/AnsiballZ_dnf.py'
Jan 21 13:56:19 compute-0 sudo[162180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:56:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:56:19 compute-0 python3.9[162182]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 13:56:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:21 compute-0 ceph-mon[75031]: pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:23 compute-0 ceph-mon[75031]: pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:56:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:25 compute-0 ceph-mon[75031]: pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:27 compute-0 ceph-mon[75031]: pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:27 compute-0 podman[162195]: 2026-01-21 13:56:27.389136084 +0000 UTC m=+0.108877763 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 21 13:56:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:56:30 compute-0 ceph-mon[75031]: pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:31 compute-0 ceph-mon[75031]: pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:33 compute-0 podman[162219]: 2026-01-21 13:56:33.326257915 +0000 UTC m=+0.053013238 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 21 13:56:33 compute-0 ceph-mon[75031]: pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 21 13:56:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:56:33.886 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 13:56:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:56:33.887 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 13:56:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:56:33.887 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 13:56:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:56:35 compute-0 sudo[162254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:56:35 compute-0 sudo[162254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:56:35 compute-0 sudo[162254]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:35 compute-0 sudo[162282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 13:56:35 compute-0 sudo[162282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:56:35 compute-0 ceph-mon[75031]: pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 21 13:56:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 13:56:35 compute-0 podman[162363]: 2026-01-21 13:56:35.9414882 +0000 UTC m=+0.117413689 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 13:56:36 compute-0 podman[162363]: 2026-01-21 13:56:36.062960025 +0000 UTC m=+0.238885524 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 13:56:36 compute-0 sudo[162282]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:56:36 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:56:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:56:36 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:56:37 compute-0 sudo[162573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:56:37 compute-0 sudo[162573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:56:37 compute-0 sudo[162573]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:37 compute-0 sudo[162600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:56:37 compute-0 sudo[162600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:56:37 compute-0 ceph-mon[75031]: pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 13:56:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:56:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:56:37 compute-0 sudo[162600]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 13:56:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 13:56:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:56:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:56:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:56:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:56:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:56:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:56:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:56:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:56:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:56:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:56:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:56:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:56:37 compute-0 sudo[162676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:56:37 compute-0 sudo[162676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:56:37 compute-0 sudo[162676]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:37 compute-0 sudo[162703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:56:37 compute-0 sudo[162703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:56:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 13:56:38 compute-0 podman[162748]: 2026-01-21 13:56:38.124284448 +0000 UTC m=+0.058573431 container create 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 21 13:56:38 compute-0 systemd[1]: Started libpod-conmon-9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d.scope.
Jan 21 13:56:38 compute-0 podman[162748]: 2026-01-21 13:56:38.092732958 +0000 UTC m=+0.027022041 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:56:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:56:38 compute-0 podman[162748]: 2026-01-21 13:56:38.22088794 +0000 UTC m=+0.155176943 container init 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 13:56:38 compute-0 podman[162748]: 2026-01-21 13:56:38.230704115 +0000 UTC m=+0.164993098 container start 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:56:38 compute-0 podman[162748]: 2026-01-21 13:56:38.234176243 +0000 UTC m=+0.168465246 container attach 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 21 13:56:38 compute-0 wonderful_moore[162767]: 167 167
Jan 21 13:56:38 compute-0 systemd[1]: libpod-9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d.scope: Deactivated successfully.
Jan 21 13:56:38 compute-0 conmon[162767]: conmon 9260d4494541e4fcf093 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d.scope/container/memory.events
Jan 21 13:56:38 compute-0 podman[162748]: 2026-01-21 13:56:38.239682494 +0000 UTC m=+0.173971497 container died 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:56:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-15d4a27de7b7de94f054afd36c35823353887a0432260cbe8cf96577386c4488-merged.mount: Deactivated successfully.
Jan 21 13:56:38 compute-0 podman[162748]: 2026-01-21 13:56:38.281894566 +0000 UTC m=+0.216183549 container remove 9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_moore, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:56:38 compute-0 systemd[1]: libpod-conmon-9260d4494541e4fcf093893a337289e991d383237e492bd6cf75c22ae794051d.scope: Deactivated successfully.
Jan 21 13:56:38 compute-0 podman[162799]: 2026-01-21 13:56:38.476702369 +0000 UTC m=+0.069504221 container create 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:56:38 compute-0 systemd[1]: Started libpod-conmon-312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595.scope.
Jan 21 13:56:38 compute-0 podman[162799]: 2026-01-21 13:56:38.438296206 +0000 UTC m=+0.031098158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:56:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 13:56:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:56:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:56:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:56:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:56:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:56:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:56:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:38 compute-0 podman[162799]: 2026-01-21 13:56:38.581126134 +0000 UTC m=+0.173928016 container init 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:56:38 compute-0 podman[162799]: 2026-01-21 13:56:38.59708137 +0000 UTC m=+0.189883222 container start 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:56:38 compute-0 podman[162799]: 2026-01-21 13:56:38.600873058 +0000 UTC m=+0.193675050 container attach 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 13:56:39 compute-0 reverent_gates[162821]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:56:39 compute-0 reverent_gates[162821]: --> All data devices are unavailable
Jan 21 13:56:39 compute-0 systemd[1]: libpod-312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595.scope: Deactivated successfully.
Jan 21 13:56:39 compute-0 podman[162799]: 2026-01-21 13:56:39.128241594 +0000 UTC m=+0.721043456 container died 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 13:56:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-eab67e4cafda37f8ea04ce567b4b9acdc01e0133d7a09c079885ffec352aad86-merged.mount: Deactivated successfully.
Jan 21 13:56:39 compute-0 podman[162799]: 2026-01-21 13:56:39.17958188 +0000 UTC m=+0.772383732 container remove 312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 13:56:39 compute-0 systemd[1]: libpod-conmon-312b4bac6d40087a5df491820a07d31f219b4cbc6a4a09ee6e410b3ebca2e595.scope: Deactivated successfully.
Jan 21 13:56:39 compute-0 sudo[162703]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:39 compute-0 sudo[162876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:56:39 compute-0 sudo[162876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:56:39 compute-0 sudo[162876]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:39 compute-0 sudo[162903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:56:39 compute-0 sudo[162903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:56:39 compute-0 ceph-mon[75031]: pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 13:56:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:56:39
Jan 21 13:56:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:56:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:56:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta']
Jan 21 13:56:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:56:39 compute-0 podman[162948]: 2026-01-21 13:56:39.695030117 +0000 UTC m=+0.082799934 container create 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 13:56:39 compute-0 systemd[1]: Started libpod-conmon-1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3.scope.
Jan 21 13:56:39 compute-0 podman[162948]: 2026-01-21 13:56:39.650990539 +0000 UTC m=+0.038760306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:56:39 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:56:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 13:56:39 compute-0 podman[162948]: 2026-01-21 13:56:39.906184439 +0000 UTC m=+0.293954206 container init 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:56:39 compute-0 podman[162948]: 2026-01-21 13:56:39.917890912 +0000 UTC m=+0.305660649 container start 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 13:56:39 compute-0 podman[162948]: 2026-01-21 13:56:39.922504346 +0000 UTC m=+0.310274073 container attach 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 13:56:39 compute-0 sharp_heisenberg[162967]: 167 167
Jan 21 13:56:39 compute-0 systemd[1]: libpod-1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3.scope: Deactivated successfully.
Jan 21 13:56:39 compute-0 podman[162948]: 2026-01-21 13:56:39.926634154 +0000 UTC m=+0.314403881 container died 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:56:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-94aa4b32ee3c46175c93f4bb7a7295aa1e493d6713fbe6bd0a111dad168a1a37-merged.mount: Deactivated successfully.
Jan 21 13:56:39 compute-0 podman[162948]: 2026-01-21 13:56:39.982923073 +0000 UTC m=+0.370692800 container remove 1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:56:40 compute-0 systemd[1]: libpod-conmon-1f524834494e0661f3404b5dcf62ab95d2e7f45c11335fc670d2decc50fb93a3.scope: Deactivated successfully.
Jan 21 13:56:40 compute-0 podman[163000]: 2026-01-21 13:56:40.201706461 +0000 UTC m=+0.053305487 container create 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 13:56:40 compute-0 systemd[1]: Started libpod-conmon-91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819.scope.
Jan 21 13:56:40 compute-0 podman[163000]: 2026-01-21 13:56:40.176675444 +0000 UTC m=+0.028274550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:56:40 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc66562db557b3e3aaba1201735eb9f3254db76c326117a041288836cb4002e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc66562db557b3e3aaba1201735eb9f3254db76c326117a041288836cb4002e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc66562db557b3e3aaba1201735eb9f3254db76c326117a041288836cb4002e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc66562db557b3e3aaba1201735eb9f3254db76c326117a041288836cb4002e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:40 compute-0 podman[163000]: 2026-01-21 13:56:40.312126063 +0000 UTC m=+0.163725099 container init 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 13:56:40 compute-0 podman[163000]: 2026-01-21 13:56:40.326998945 +0000 UTC m=+0.178597951 container start 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:56:40 compute-0 podman[163000]: 2026-01-21 13:56:40.330671689 +0000 UTC m=+0.182270695 container attach 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:56:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:56:40 compute-0 sharp_mendel[163019]: {
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:     "0": [
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:         {
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "devices": [
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "/dev/loop3"
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             ],
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_name": "ceph_lv0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_size": "21470642176",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "name": "ceph_lv0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "tags": {
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.cluster_name": "ceph",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.crush_device_class": "",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.encrypted": "0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.objectstore": "bluestore",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.osd_id": "0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.type": "block",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.vdo": "0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.with_tpm": "0"
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             },
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "type": "block",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "vg_name": "ceph_vg0"
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:         }
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:     ],
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:     "1": [
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:         {
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "devices": [
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "/dev/loop4"
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             ],
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_name": "ceph_lv1",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_size": "21470642176",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "name": "ceph_lv1",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "tags": {
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.cluster_name": "ceph",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.crush_device_class": "",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.encrypted": "0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.objectstore": "bluestore",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.osd_id": "1",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.type": "block",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.vdo": "0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.with_tpm": "0"
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             },
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "type": "block",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "vg_name": "ceph_vg1"
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:         }
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:     ],
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:     "2": [
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:         {
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "devices": [
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "/dev/loop5"
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             ],
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_name": "ceph_lv2",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_size": "21470642176",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "name": "ceph_lv2",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "tags": {
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.cluster_name": "ceph",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.crush_device_class": "",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.encrypted": "0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.objectstore": "bluestore",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.osd_id": "2",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.type": "block",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.vdo": "0",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:                 "ceph.with_tpm": "0"
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             },
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "type": "block",
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:             "vg_name": "ceph_vg2"
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:         }
Jan 21 13:56:40 compute-0 sharp_mendel[163019]:     ]
Jan 21 13:56:40 compute-0 sharp_mendel[163019]: }
Jan 21 13:56:40 compute-0 systemd[1]: libpod-91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819.scope: Deactivated successfully.
Jan 21 13:56:40 compute-0 conmon[163019]: conmon 91a49741916649ec12b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819.scope/container/memory.events
Jan 21 13:56:40 compute-0 podman[163035]: 2026-01-21 13:56:40.720020497 +0000 UTC m=+0.030365804 container died 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 13:56:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc66562db557b3e3aaba1201735eb9f3254db76c326117a041288836cb4002e8-merged.mount: Deactivated successfully.
Jan 21 13:56:40 compute-0 podman[163035]: 2026-01-21 13:56:40.785966476 +0000 UTC m=+0.096311703 container remove 91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 13:56:40 compute-0 systemd[1]: libpod-conmon-91a49741916649ec12b7c7b4b7b76e07bc2b57a87d990132aac168671816f819.scope: Deactivated successfully.
Jan 21 13:56:40 compute-0 sudo[162903]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:40 compute-0 sudo[163050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:56:40 compute-0 sudo[163050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:56:40 compute-0 sudo[163050]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:40 compute-0 sudo[163075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:56:40 compute-0 sudo[163075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:56:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:56:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:56:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:56:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:56:41 compute-0 podman[163112]: 2026-01-21 13:56:41.240398258 +0000 UTC m=+0.021224561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:56:41 compute-0 podman[163112]: 2026-01-21 13:56:41.440902638 +0000 UTC m=+0.221728951 container create d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:56:41 compute-0 systemd[1]: Started libpod-conmon-d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907.scope.
Jan 21 13:56:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:56:41 compute-0 podman[163112]: 2026-01-21 13:56:41.531514464 +0000 UTC m=+0.312340807 container init d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 13:56:41 compute-0 podman[163112]: 2026-01-21 13:56:41.538448779 +0000 UTC m=+0.319275112 container start d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:56:41 compute-0 podman[163112]: 2026-01-21 13:56:41.542735403 +0000 UTC m=+0.323561686 container attach d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 13:56:41 compute-0 confident_buck[163128]: 167 167
Jan 21 13:56:41 compute-0 systemd[1]: libpod-d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907.scope: Deactivated successfully.
Jan 21 13:56:41 compute-0 podman[163112]: 2026-01-21 13:56:41.546796719 +0000 UTC m=+0.327622992 container died d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 13:56:41 compute-0 ceph-mon[75031]: pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 13:56:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b935e79bd7fd359272612957be07fd42f2c76b48956777a118b602b67482e320-merged.mount: Deactivated successfully.
Jan 21 13:56:41 compute-0 podman[163112]: 2026-01-21 13:56:41.60283544 +0000 UTC m=+0.383661723 container remove d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_buck, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 13:56:41 compute-0 systemd[1]: libpod-conmon-d81339012a31434600e1231eaaea0c5274fcac1f605736eaa158cca3afd55907.scope: Deactivated successfully.
Jan 21 13:56:41 compute-0 podman[163152]: 2026-01-21 13:56:41.843279402 +0000 UTC m=+0.071483492 container create 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:56:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 13:56:41 compute-0 systemd[1]: Started libpod-conmon-51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede.scope.
Jan 21 13:56:41 compute-0 podman[163152]: 2026-01-21 13:56:41.812425493 +0000 UTC m=+0.040629643 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:56:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5655272caa7303084f3935415d93f13f633e8f3c640bc634710394908175978f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5655272caa7303084f3935415d93f13f633e8f3c640bc634710394908175978f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5655272caa7303084f3935415d93f13f633e8f3c640bc634710394908175978f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5655272caa7303084f3935415d93f13f633e8f3c640bc634710394908175978f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:56:41 compute-0 podman[163152]: 2026-01-21 13:56:41.945221529 +0000 UTC m=+0.173425609 container init 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 13:56:41 compute-0 podman[163152]: 2026-01-21 13:56:41.953603419 +0000 UTC m=+0.181807469 container start 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 13:56:41 compute-0 podman[163152]: 2026-01-21 13:56:41.958454491 +0000 UTC m=+0.186658541 container attach 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 13:56:42 compute-0 lvm[163248]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:56:42 compute-0 lvm[163248]: VG ceph_vg1 finished
Jan 21 13:56:42 compute-0 lvm[163247]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:56:42 compute-0 lvm[163247]: VG ceph_vg0 finished
Jan 21 13:56:42 compute-0 lvm[163250]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:56:42 compute-0 lvm[163250]: VG ceph_vg2 finished
Jan 21 13:56:42 compute-0 nice_noyce[163169]: {}
Jan 21 13:56:42 compute-0 ceph-mon[75031]: pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 13:56:42 compute-0 systemd[1]: libpod-51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede.scope: Deactivated successfully.
Jan 21 13:56:42 compute-0 podman[163152]: 2026-01-21 13:56:42.750525794 +0000 UTC m=+0.978729864 container died 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:56:42 compute-0 systemd[1]: libpod-51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede.scope: Consumed 1.214s CPU time.
Jan 21 13:56:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5655272caa7303084f3935415d93f13f633e8f3c640bc634710394908175978f-merged.mount: Deactivated successfully.
Jan 21 13:56:42 compute-0 podman[163152]: 2026-01-21 13:56:42.804452499 +0000 UTC m=+1.032656579 container remove 51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:56:42 compute-0 systemd[1]: libpod-conmon-51a471c99d14246e0ba43d7079a91953530128d5963d8c3b5614bf01ecc1bede.scope: Deactivated successfully.
Jan 21 13:56:42 compute-0 sudo[163075]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:56:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:56:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:56:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:56:42 compute-0 sudo[163267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:56:42 compute-0 sudo[163267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:56:42 compute-0 sudo[163267]: pam_unix(sudo:session): session closed for user root
Jan 21 13:56:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 13:56:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:56:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:56:45 compute-0 ceph-mon[75031]: pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 13:56:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:56:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 21 13:56:47 compute-0 ceph-mon[75031]: pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 21 13:56:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:49 compute-0 ceph-mon[75031]: pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:56:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 13:56:51 compute-0 ceph-mon[75031]: pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:53 compute-0 ceph-mon[75031]: pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:54 compute-0 ceph-mon[75031]: pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:55 compute-0 kernel: SELinux:  Converting 2774 SID table entries...
Jan 21 13:56:55 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 13:56:55 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 13:56:55 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 13:56:55 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 13:56:55 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 13:56:55 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 13:56:55 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 13:56:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:56:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:57 compute-0 ceph-mon[75031]: pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:58 compute-0 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 21 13:56:58 compute-0 podman[163308]: 2026-01-21 13:56:58.381789236 +0000 UTC m=+0.097350786 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 21 13:56:58 compute-0 ceph-mon[75031]: pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:56:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:01 compute-0 ceph-mon[75031]: pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:02 compute-0 ceph-mon[75031]: pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:04 compute-0 podman[163340]: 2026-01-21 13:57:04.34150534 +0000 UTC m=+0.070544910 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:57:05 compute-0 ceph-mon[75031]: pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:05 compute-0 kernel: SELinux:  Converting 2774 SID table entries...
Jan 21 13:57:05 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 13:57:05 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 13:57:05 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 13:57:05 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 13:57:05 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 13:57:05 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 13:57:05 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 13:57:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:07 compute-0 ceph-mon[75031]: pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:09 compute-0 ceph-mon[75031]: pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:57:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:57:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:57:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:57:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:57:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:57:11 compute-0 ceph-mon[75031]: pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:13 compute-0 ceph-mon[75031]: pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:15 compute-0 ceph-mon[75031]: pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:17 compute-0 ceph-mon[75031]: pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:19 compute-0 ceph-mon[75031]: pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:21 compute-0 ceph-mon[75031]: pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:23 compute-0 ceph-mon[75031]: pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:25 compute-0 ceph-mon[75031]: pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:27 compute-0 ceph-mon[75031]: pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:28 compute-0 ceph-mon[75031]: pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:29 compute-0 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 21 13:57:29 compute-0 podman[170169]: 2026-01-21 13:57:29.381965548 +0000 UTC m=+0.088129993 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 13:57:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:30 compute-0 ceph-mon[75031]: pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:32 compute-0 ceph-mon[75031]: pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:57:33.887 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 13:57:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:57:33.887 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 13:57:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:57:33.888 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 13:57:35 compute-0 ceph-mon[75031]: pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:35 compute-0 podman[173869]: 2026-01-21 13:57:35.344411796 +0000 UTC m=+0.075432103 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 21 13:57:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:37 compute-0 ceph-mon[75031]: pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:57:39
Jan 21 13:57:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:57:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:57:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['backups', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'images']
Jan 21 13:57:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:57:39 compute-0 ceph-mon[75031]: pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:57:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:57:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:57:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:57:41 compute-0 ceph-mon[75031]: pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:43 compute-0 sudo[178382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:57:43 compute-0 sudo[178382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:57:43 compute-0 sudo[178382]: pam_unix(sudo:session): session closed for user root
Jan 21 13:57:43 compute-0 sudo[178452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:57:43 compute-0 sudo[178452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:57:43 compute-0 ceph-mon[75031]: pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:43 compute-0 sudo[178452]: pam_unix(sudo:session): session closed for user root
Jan 21 13:57:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:57:43 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:57:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:57:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:57:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:57:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:57:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:57:43 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:57:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:57:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:57:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:57:43 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:57:43 compute-0 sudo[178893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:57:43 compute-0 sudo[178893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:57:43 compute-0 sudo[178893]: pam_unix(sudo:session): session closed for user root
Jan 21 13:57:43 compute-0 sudo[178956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:57:43 compute-0 sudo[178956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:57:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:44 compute-0 podman[179154]: 2026-01-21 13:57:44.06591568 +0000 UTC m=+0.048485080 container create b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 13:57:44 compute-0 systemd[1]: Started libpod-conmon-b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4.scope.
Jan 21 13:57:44 compute-0 podman[179154]: 2026-01-21 13:57:44.039910597 +0000 UTC m=+0.022480027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:57:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:57:44 compute-0 podman[179154]: 2026-01-21 13:57:44.226300306 +0000 UTC m=+0.208869726 container init b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 21 13:57:44 compute-0 podman[179154]: 2026-01-21 13:57:44.233969572 +0000 UTC m=+0.216539012 container start b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:57:44 compute-0 bold_burnell[179252]: 167 167
Jan 21 13:57:44 compute-0 systemd[1]: libpod-b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4.scope: Deactivated successfully.
Jan 21 13:57:44 compute-0 podman[179154]: 2026-01-21 13:57:44.241010873 +0000 UTC m=+0.223580303 container attach b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:57:44 compute-0 podman[179154]: 2026-01-21 13:57:44.241494405 +0000 UTC m=+0.224063845 container died b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 13:57:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcc46d0c308f97cf0f0ed84b1ad02615395755fc8f4a193726b95fa3ef6405cc-merged.mount: Deactivated successfully.
Jan 21 13:57:44 compute-0 podman[179154]: 2026-01-21 13:57:44.322950704 +0000 UTC m=+0.305520104 container remove b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_burnell, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 13:57:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:57:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:57:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:57:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:57:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:57:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:57:44 compute-0 systemd[1]: libpod-conmon-b6d69a9d7cb655afae5a27b8c41eaf78fea0901986be90774b23dc472a4044a4.scope: Deactivated successfully.
Jan 21 13:57:44 compute-0 podman[179491]: 2026-01-21 13:57:44.484283544 +0000 UTC m=+0.039232474 container create 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 13:57:44 compute-0 systemd[1]: Started libpod-conmon-8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61.scope.
Jan 21 13:57:44 compute-0 podman[179491]: 2026-01-21 13:57:44.469405222 +0000 UTC m=+0.024354162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:57:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:44 compute-0 podman[179491]: 2026-01-21 13:57:44.590980686 +0000 UTC m=+0.145929726 container init 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 13:57:44 compute-0 podman[179491]: 2026-01-21 13:57:44.601822948 +0000 UTC m=+0.156771888 container start 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 13:57:44 compute-0 podman[179491]: 2026-01-21 13:57:44.619657532 +0000 UTC m=+0.174606802 container attach 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:57:45 compute-0 flamboyant_brown[179558]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:57:45 compute-0 flamboyant_brown[179558]: --> All data devices are unavailable
Jan 21 13:57:45 compute-0 systemd[1]: libpod-8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61.scope: Deactivated successfully.
Jan 21 13:57:45 compute-0 podman[179491]: 2026-01-21 13:57:45.104802909 +0000 UTC m=+0.659751849 container died 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 13:57:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:46 compute-0 ceph-mon[75031]: pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-610371be79bae70019ef50089ad1c9c79eef399a02e7d418b2582ca70596bd69-merged.mount: Deactivated successfully.
Jan 21 13:57:47 compute-0 podman[179491]: 2026-01-21 13:57:47.289081124 +0000 UTC m=+2.844030104 container remove 8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:57:47 compute-0 systemd[1]: libpod-conmon-8a9415897147e83f8620b1e3f0c22d5f854b8a2d574897799774c5d8444d4d61.scope: Deactivated successfully.
Jan 21 13:57:47 compute-0 sudo[178956]: pam_unix(sudo:session): session closed for user root
Jan 21 13:57:47 compute-0 sudo[180500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:57:47 compute-0 sudo[180500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:57:47 compute-0 sudo[180500]: pam_unix(sudo:session): session closed for user root
Jan 21 13:57:47 compute-0 sudo[180525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:57:47 compute-0 sudo[180525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:57:47 compute-0 podman[180563]: 2026-01-21 13:57:47.866318678 +0000 UTC m=+0.087513327 container create 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:57:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:47 compute-0 podman[180563]: 2026-01-21 13:57:47.819217413 +0000 UTC m=+0.040412092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:57:47 compute-0 systemd[1]: Started libpod-conmon-950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8.scope.
Jan 21 13:57:47 compute-0 ceph-mon[75031]: pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:47 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:57:48 compute-0 podman[180563]: 2026-01-21 13:57:48.037092577 +0000 UTC m=+0.258287256 container init 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:57:48 compute-0 podman[180563]: 2026-01-21 13:57:48.050099753 +0000 UTC m=+0.271294402 container start 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:57:48 compute-0 podman[180563]: 2026-01-21 13:57:48.054128781 +0000 UTC m=+0.275323450 container attach 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 13:57:48 compute-0 objective_mclaren[180580]: 167 167
Jan 21 13:57:48 compute-0 systemd[1]: libpod-950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8.scope: Deactivated successfully.
Jan 21 13:57:48 compute-0 podman[180563]: 2026-01-21 13:57:48.056867257 +0000 UTC m=+0.278061966 container died 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-77fc014a14235b371651e1301b86b7cd028b157dc86c186c6863416377df5dc7-merged.mount: Deactivated successfully.
Jan 21 13:57:48 compute-0 podman[180563]: 2026-01-21 13:57:48.154397807 +0000 UTC m=+0.375592486 container remove 950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mclaren, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:57:48 compute-0 systemd[1]: libpod-conmon-950576de85d33a3f533cd416d21a2fc0d245d173964992389e77d07e608c4dc8.scope: Deactivated successfully.
Jan 21 13:57:48 compute-0 podman[180603]: 2026-01-21 13:57:48.336595723 +0000 UTC m=+0.054374822 container create 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 13:57:48 compute-0 systemd[1]: Started libpod-conmon-02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c.scope.
Jan 21 13:57:48 compute-0 podman[180603]: 2026-01-21 13:57:48.307483726 +0000 UTC m=+0.025262865 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:57:48 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:57:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff56d23d8093e0db83c6f1b27708ee3bd96cf31b9faaff578200c682270f5d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff56d23d8093e0db83c6f1b27708ee3bd96cf31b9faaff578200c682270f5d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff56d23d8093e0db83c6f1b27708ee3bd96cf31b9faaff578200c682270f5d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff56d23d8093e0db83c6f1b27708ee3bd96cf31b9faaff578200c682270f5d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:48 compute-0 podman[180603]: 2026-01-21 13:57:48.427681786 +0000 UTC m=+0.145460895 container init 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:57:48 compute-0 podman[180603]: 2026-01-21 13:57:48.445119819 +0000 UTC m=+0.162898928 container start 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:57:48 compute-0 podman[180603]: 2026-01-21 13:57:48.456505537 +0000 UTC m=+0.174284656 container attach 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:57:48 compute-0 awesome_dirac[180620]: {
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:     "0": [
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:         {
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "devices": [
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "/dev/loop3"
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             ],
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_name": "ceph_lv0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_size": "21470642176",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "name": "ceph_lv0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "tags": {
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.cluster_name": "ceph",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.crush_device_class": "",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.encrypted": "0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.objectstore": "bluestore",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.osd_id": "0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.type": "block",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.vdo": "0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.with_tpm": "0"
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             },
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "type": "block",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "vg_name": "ceph_vg0"
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:         }
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:     ],
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:     "1": [
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:         {
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "devices": [
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "/dev/loop4"
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             ],
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_name": "ceph_lv1",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_size": "21470642176",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "name": "ceph_lv1",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "tags": {
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.cluster_name": "ceph",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.crush_device_class": "",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.encrypted": "0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.objectstore": "bluestore",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.osd_id": "1",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.type": "block",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.vdo": "0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.with_tpm": "0"
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             },
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "type": "block",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "vg_name": "ceph_vg1"
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:         }
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:     ],
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:     "2": [
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:         {
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "devices": [
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "/dev/loop5"
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             ],
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_name": "ceph_lv2",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_size": "21470642176",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "name": "ceph_lv2",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "tags": {
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.cluster_name": "ceph",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.crush_device_class": "",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.encrypted": "0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.objectstore": "bluestore",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.osd_id": "2",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.type": "block",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.vdo": "0",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:                 "ceph.with_tpm": "0"
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             },
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "type": "block",
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:             "vg_name": "ceph_vg2"
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:         }
Jan 21 13:57:48 compute-0 awesome_dirac[180620]:     ]
Jan 21 13:57:48 compute-0 awesome_dirac[180620]: }
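
The JSON block emitted by the awesome_dirac container above is the output of cephadm's ceph-volume "lvm list --format json" probe: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags present both as a raw "lv_tags" string and as a parsed "tags" object. A minimal sketch for extracting the OSD-to-device mapping from such output; the filename "lvm_list.json" is illustrative and does not appear in the log:

    import json

    # Hedged sketch: parse `ceph-volume lvm list --format json` output like
    # the blob logged above. "lvm_list.json" is an illustrative filename.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"pv={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")

For the output above this prints, for example, osd.0 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3.
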
Jan 21 13:57:48 compute-0 systemd[1]: libpod-02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c.scope: Deactivated successfully.
Jan 21 13:57:48 compute-0 podman[180603]: 2026-01-21 13:57:48.727273364 +0000 UTC m=+0.445052463 container died 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ff56d23d8093e0db83c6f1b27708ee3bd96cf31b9faaff578200c682270f5d4-merged.mount: Deactivated successfully.
Jan 21 13:57:48 compute-0 podman[180603]: 2026-01-21 13:57:48.782071336 +0000 UTC m=+0.499850435 container remove 02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:57:48 compute-0 systemd[1]: libpod-conmon-02efcc63f0f76a2e8efe03a8ad8c20c2ce1e1966fd59c788881c4998ac83347c.scope: Deactivated successfully.
Jan 21 13:57:48 compute-0 sudo[180525]: pam_unix(sudo:session): session closed for user root
Jan 21 13:57:48 compute-0 sudo[180643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:57:48 compute-0 sudo[180643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:57:48 compute-0 sudo[180643]: pam_unix(sudo:session): session closed for user root
Jan 21 13:57:48 compute-0 sudo[180668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:57:48 compute-0 sudo[180668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:57:49 compute-0 podman[180705]: 2026-01-21 13:57:49.191185995 +0000 UTC m=+0.037334648 container create f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:57:49 compute-0 systemd[1]: Started libpod-conmon-f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869.scope.
Jan 21 13:57:49 compute-0 ceph-mon[75031]: pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:49 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:57:49 compute-0 podman[180705]: 2026-01-21 13:57:49.173545517 +0000 UTC m=+0.019694150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:57:49 compute-0 podman[180705]: 2026-01-21 13:57:49.283058347 +0000 UTC m=+0.129207040 container init f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 13:57:49 compute-0 podman[180705]: 2026-01-21 13:57:49.292291201 +0000 UTC m=+0.138439824 container start f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 13:57:49 compute-0 podman[180705]: 2026-01-21 13:57:49.296588176 +0000 UTC m=+0.142736809 container attach f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 13:57:49 compute-0 intelligent_blackburn[180722]: 167 167
Jan 21 13:57:49 compute-0 systemd[1]: libpod-f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869.scope: Deactivated successfully.
Jan 21 13:57:49 compute-0 podman[180705]: 2026-01-21 13:57:49.298798979 +0000 UTC m=+0.144947642 container died f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 21 13:57:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d29fb1b50d2e96b3b90d4a0c4e8be3565938abc4311ff804ddf00233afaf2609-merged.mount: Deactivated successfully.
Jan 21 13:57:49 compute-0 podman[180705]: 2026-01-21 13:57:49.397751183 +0000 UTC m=+0.243899806 container remove f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:57:49 compute-0 systemd[1]: libpod-conmon-f42a8e380dcecbd9ea6ae037da1cfe86fcf377dea1e88bfb0376d7f07ed6b869.scope: Deactivated successfully.
Jan 21 13:57:49 compute-0 podman[180748]: 2026-01-21 13:57:49.552233836 +0000 UTC m=+0.042812010 container create 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:57:49 compute-0 systemd[1]: Started libpod-conmon-59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd.scope.
Jan 21 13:57:49 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb44bb2a62ffe67e6276dfad76ae8751933f0a207f27519e6321feee2bd929/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb44bb2a62ffe67e6276dfad76ae8751933f0a207f27519e6321feee2bd929/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb44bb2a62ffe67e6276dfad76ae8751933f0a207f27519e6321feee2bd929/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb44bb2a62ffe67e6276dfad76ae8751933f0a207f27519e6321feee2bd929/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:57:49 compute-0 podman[180748]: 2026-01-21 13:57:49.533791928 +0000 UTC m=+0.024370102 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:57:49 compute-0 podman[180748]: 2026-01-21 13:57:49.688776833 +0000 UTC m=+0.179355027 container init 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Jan 21 13:57:49 compute-0 podman[180748]: 2026-01-21 13:57:49.696530321 +0000 UTC m=+0.187108475 container start 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:57:49 compute-0 podman[180748]: 2026-01-21 13:57:49.700060208 +0000 UTC m=+0.190638382 container attach 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 13:57:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:50 compute-0 lvm[180847]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:57:50 compute-0 lvm[180847]: VG ceph_vg0 finished
Jan 21 13:57:50 compute-0 lvm[180848]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:57:50 compute-0 lvm[180848]: VG ceph_vg1 finished
Jan 21 13:57:50 compute-0 lvm[180850]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:57:50 compute-0 lvm[180850]: VG ceph_vg2 finished
Jan 21 13:57:50 compute-0 lvm[180851]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:57:50 compute-0 lvm[180851]: VG ceph_vg0 finished
Jan 21 13:57:50 compute-0 lvm[180853]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:57:50 compute-0 lvm[180853]: VG ceph_vg2 finished
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:57:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
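
Each pg_autoscaler pair of lines above applies the same arithmetic: the pool's fraction of raw capacity, times its bias, times the cluster-wide PG budget gives the raw "pg target", which is then apparently quantized to a power of two (or left at the current pg_num when no change is warranted). The logged targets are reproducible if one assumes the mon_target_pg_per_osd default of 100 and the three OSDs present on this node; both are stated assumptions, not values printed in these lines:

    # Hedged check of the pg_autoscaler arithmetic in the lines above.
    # Assumptions: mon_target_pg_per_osd=100 (the default) and 3 OSDs.
    pg_budget = 100 * 3

    # (pool, usage fraction, bias) copied from the log lines
    pools = [
        (".mgr",               7.185749983720779e-06,  1.0),
        ("cephfs.cephfs.meta", 1.2753072983198444e-06, 4.0),
        (".rgw.root",          3.8154424692322717e-07, 1.0),
        ("default.rgw.log",    4.1969867161554995e-06, 1.0),
    ]
    for name, usage, bias in pools:
        print(name, usage * bias * pg_budget)
    # Matches the logged "pg target" values (up to float rounding):
    # 0.0021557249951162337, 0.0015303687579838134,
    # 0.00011446327407696816, 0.0012590960148466499
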
Jan 21 13:57:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:50 compute-0 lvm[180854]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:57:50 compute-0 lvm[180854]: VG ceph_vg2 finished
Jan 21 13:57:50 compute-0 flamboyant_heyrovsky[180769]: {}
Jan 21 13:57:50 compute-0 systemd[1]: libpod-59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd.scope: Deactivated successfully.
Jan 21 13:57:50 compute-0 systemd[1]: libpod-59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd.scope: Consumed 1.276s CPU time.
Jan 21 13:57:50 compute-0 podman[180748]: 2026-01-21 13:57:50.568434764 +0000 UTC m=+1.059012928 container died 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:57:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fdb44bb2a62ffe67e6276dfad76ae8751933f0a207f27519e6321feee2bd929-merged.mount: Deactivated successfully.
Jan 21 13:57:50 compute-0 podman[180748]: 2026-01-21 13:57:50.690307595 +0000 UTC m=+1.180885749 container remove 59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heyrovsky, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 13:57:50 compute-0 systemd[1]: libpod-conmon-59221d7483c6465e750bbc185c4696b34de5b5db5ef1f60ab09deed7203c1fdd.scope: Deactivated successfully.
Jan 21 13:57:50 compute-0 sudo[180668]: pam_unix(sudo:session): session closed for user root
Jan 21 13:57:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:57:50 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:57:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:57:50 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:57:50 compute-0 sudo[180876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:57:50 compute-0 sudo[180876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:57:50 compute-0 sudo[180876]: pam_unix(sudo:session): session closed for user root
Jan 21 13:57:51 compute-0 ceph-mon[75031]: pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:51 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:57:51 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:57:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:53 compute-0 ceph-mon[75031]: pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:57:55 compute-0 ceph-mon[75031]: pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:57 compute-0 ceph-mon[75031]: pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:57:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:00 compute-0 podman[180906]: 2026-01-21 13:58:00.404464035 +0000 UTC m=+0.116692936 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 13:58:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:00 compute-0 ceph-mon[75031]: pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:01 compute-0 kernel: SELinux:  Converting 2775 SID table entries...
Jan 21 13:58:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 21 13:58:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 21 13:58:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 21 13:58:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 21 13:58:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 21 13:58:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 21 13:58:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 21 13:58:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:02 compute-0 ceph-mon[75031]: pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:03 compute-0 ceph-mon[75031]: pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:03 compute-0 groupadd[180941]: group added to /etc/group: name=dnsmasq, GID=992
Jan 21 13:58:03 compute-0 groupadd[180941]: group added to /etc/gshadow: name=dnsmasq
Jan 21 13:58:03 compute-0 groupadd[180941]: new group: name=dnsmasq, GID=992
Jan 21 13:58:03 compute-0 useradd[180948]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 21 13:58:03 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 13:58:03 compute-0 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 21 13:58:03 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Jan 21 13:58:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:04 compute-0 groupadd[180961]: group added to /etc/group: name=clevis, GID=991
Jan 21 13:58:04 compute-0 groupadd[180961]: group added to /etc/gshadow: name=clevis
Jan 21 13:58:04 compute-0 groupadd[180961]: new group: name=clevis, GID=991
Jan 21 13:58:04 compute-0 useradd[180968]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 21 13:58:04 compute-0 usermod[180978]: add 'clevis' to group 'tss'
Jan 21 13:58:04 compute-0 usermod[180978]: add 'clevis' to shadow group 'tss'
Jan 21 13:58:05 compute-0 ceph-mon[75031]: pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:06 compute-0 podman[180988]: 2026-01-21 13:58:06.389410784 +0000 UTC m=+0.089118886 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
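
Podman logs these health_status events as one long line of comma-separated key=value attributes, with the container's nested config_data JSON inlined. When scanning a journal for failing checks, the fields of interest are health_status and health_failing_streak; a small sketch that pulls them from a line like the ovn_controller and ovn_metadata_agent entries above (the regex is a heuristic that deliberately ignores the nested config_data, and the function name is illustrative):

    import re

    # Hedged sketch: extract health fields from a podman health_status
    # journal line like the entries above.
    def health_fields(line: str) -> dict:
        fields = {}
        for key in ("name", "health_status", "health_failing_streak"):
            m = re.search(rf"\b{key}=([^,)]*)", line)
            if m:
                fields[key] = m.group(1)
        return fields

    # For the line above this yields, e.g.:
    # {'name': 'ovn_metadata_agent', 'health_status': 'healthy',
    #  'health_failing_streak': '0'}
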
Jan 21 13:58:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:08 compute-0 ceph-mon[75031]: pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:09 compute-0 ceph-mon[75031]: pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:10 compute-0 polkitd[43343]: Reloading rules
Jan 21 13:58:10 compute-0 polkitd[43343]: Collecting garbage unconditionally...
Jan 21 13:58:10 compute-0 polkitd[43343]: Loading rules from directory /etc/polkit-1/rules.d
Jan 21 13:58:10 compute-0 polkitd[43343]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 21 13:58:10 compute-0 polkitd[43343]: Finished loading, compiling and executing 3 rules
Jan 21 13:58:10 compute-0 polkitd[43343]: Reloading rules
Jan 21 13:58:10 compute-0 polkitd[43343]: Collecting garbage unconditionally...
Jan 21 13:58:10 compute-0 polkitd[43343]: Loading rules from directory /etc/polkit-1/rules.d
Jan 21 13:58:10 compute-0 polkitd[43343]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 21 13:58:10 compute-0 polkitd[43343]: Finished loading, compiling and executing 3 rules
Jan 21 13:58:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:58:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:58:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:58:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:58:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:58:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:58:11 compute-0 ceph-mon[75031]: pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:13 compute-0 ceph-mon[75031]: pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.378203) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003894378233, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2038, "num_deletes": 251, "total_data_size": 3568840, "memory_usage": 3621936, "flush_reason": "Manual Compaction"}
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003894568582, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3482571, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9711, "largest_seqno": 11748, "table_properties": {"data_size": 3473314, "index_size": 5879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17768, "raw_average_key_size": 19, "raw_value_size": 3454962, "raw_average_value_size": 3780, "num_data_blocks": 267, "num_entries": 914, "num_filter_entries": 914, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003660, "oldest_key_time": 1769003660, "file_creation_time": 1769003894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 190423 microseconds, and 5937 cpu microseconds.
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.568625) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3482571 bytes OK
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.568643) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.802823) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.802867) EVENT_LOG_v1 {"time_micros": 1769003894802858, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.802892) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3560347, prev total WAL file size 3560347, number of live WAL files 2.
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.804028) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3400KB)], [26(6025KB)]
Jan 21 13:58:14 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003894804074, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9652888, "oldest_snapshot_seqno": -1}
Jan 21 13:58:15 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3705 keys, 8083234 bytes, temperature: kUnknown
Jan 21 13:58:15 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003895882613, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8083234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8054890, "index_size": 17994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88907, "raw_average_key_size": 23, "raw_value_size": 7984453, "raw_average_value_size": 2155, "num_data_blocks": 779, "num_entries": 3705, "num_filter_entries": 3705, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769003894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 21 13:58:15 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 13:58:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:15.882960) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8083234 bytes
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.955230) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 8.9 rd, 7.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4219, records dropped: 514 output_compression: NoCompression
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.955295) EVENT_LOG_v1 {"time_micros": 1769003896955271, "job": 10, "event": "compaction_finished", "compaction_time_micros": 1078675, "compaction_time_cpu_micros": 19323, "output_level": 6, "num_output_files": 1, "total_output_size": 8083234, "num_input_records": 4219, "num_output_records": 3705, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003896956333, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769003896957421, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:14.803949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.957675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.957685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.957694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.957697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:58:16 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-13:58:16.957700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 13:58:17 compute-0 ceph-mon[75031]: pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:18 compute-0 ceph-mon[75031]: pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:19 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 21 13:58:19 compute-0 sshd[1003]: Received signal 15; terminating.
Jan 21 13:58:19 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 21 13:58:19 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 21 13:58:19 compute-0 systemd[1]: sshd.service: Consumed 2.578s CPU time, read 32.0K from disk, written 0B to disk.
Jan 21 13:58:19 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 21 13:58:19 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 21 13:58:19 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 13:58:19 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 13:58:19 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 21 13:58:19 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 21 13:58:19 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 21 13:58:19 compute-0 sshd[181805]: Server listening on 0.0.0.0 port 22.
Jan 21 13:58:19 compute-0 sshd[181805]: Server listening on :: port 22.
Jan 21 13:58:19 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 21 13:58:19 compute-0 ceph-mon[75031]: pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:20 compute-0 ceph-mon[75031]: pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 13:58:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 13:58:21 compute-0 systemd[1]: Reloading.
Jan 21 13:58:21 compute-0 systemd-rc-local-generator[182057]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:58:21 compute-0 systemd-sysv-generator[182061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:58:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:22 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 13:58:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:25 compute-0 ceph-mon[75031]: pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:26 compute-0 ceph-mon[75031]: pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:26 compute-0 ceph-mon[75031]: pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:27 compute-0 sudo[162180]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:28 compute-0 sudo[188609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcjvceiohqooudgoyqdkehptxuusozlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003907.5311866-331-220374391163695/AnsiballZ_systemd.py'
Jan 21 13:58:28 compute-0 sudo[188609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:28 compute-0 python3.9[188632]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 13:58:28 compute-0 systemd[1]: Reloading.
Jan 21 13:58:28 compute-0 systemd-rc-local-generator[189093]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:58:28 compute-0 systemd-sysv-generator[189096]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:58:28 compute-0 sudo[188609]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:28 compute-0 ceph-mon[75031]: pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:29 compute-0 sudo[189927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbzsvdxtnjmfamhyzscxsjropagganbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003909.0894337-331-51081176133314/AnsiballZ_systemd.py'
Jan 21 13:58:29 compute-0 sudo[189927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:29 compute-0 python3.9[189945]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 13:58:29 compute-0 systemd[1]: Reloading.
Jan 21 13:58:29 compute-0 systemd-rc-local-generator[190374]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:58:29 compute-0 systemd-sysv-generator[190378]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:58:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:30 compute-0 sudo[189927]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:30 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 13:58:30 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 13:58:30 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.801s CPU time.
Jan 21 13:58:30 compute-0 systemd[1]: run-r6e0f824e1e6d45e7b1896cba5e1c78fd.service: Deactivated successfully.
Jan 21 13:58:30 compute-0 sudo[190998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bghhvkrwbzgkjpgmdatkcimfmrjwqdcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003910.2846575-331-19227718699800/AnsiballZ_systemd.py'
Jan 21 13:58:30 compute-0 sudo[190998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:30 compute-0 podman[190950]: 2026-01-21 13:58:30.587269874 +0000 UTC m=+0.109036230 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 21 13:58:30 compute-0 python3.9[191004]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 13:58:30 compute-0 systemd[1]: Reloading.
Jan 21 13:58:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:30 compute-0 systemd-rc-local-generator[191039]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:58:30 compute-0 systemd-sysv-generator[191042]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:58:30 compute-0 ceph-mon[75031]: pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:31 compute-0 sudo[190998]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:31 compute-0 sudo[191196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrvjhtqokncxarpffpmgewlleogkjxtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003911.3679764-331-248642059488629/AnsiballZ_systemd.py'
Jan 21 13:58:31 compute-0 sudo[191196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:31 compute-0 python3.9[191198]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 13:58:31 compute-0 systemd[1]: Reloading.
Jan 21 13:58:32 compute-0 systemd-rc-local-generator[191229]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:58:32 compute-0 systemd-sysv-generator[191233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:58:32 compute-0 sudo[191196]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:32 compute-0 sudo[191387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yttomcbfsfpcjmmuxiiqccggtwqubobq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003912.4225073-360-31992384996780/AnsiballZ_systemd.py'
Jan 21 13:58:32 compute-0 sudo[191387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:33 compute-0 ceph-mon[75031]: pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:33 compute-0 python3.9[191389]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:33 compute-0 systemd[1]: Reloading.
Jan 21 13:58:33 compute-0 systemd-rc-local-generator[191420]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:58:33 compute-0 systemd-sysv-generator[191423]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:58:33 compute-0 sudo[191387]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:33 compute-0 sudo[191577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grflpeqfmdkcvvyaitbnlqslygwntoct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003913.5447938-360-218978929741192/AnsiballZ_systemd.py'
Jan 21 13:58:33 compute-0 sudo[191577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:58:33.887 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 13:58:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:58:33.888 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 13:58:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:58:33.888 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 13:58:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:34 compute-0 python3.9[191579]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:34 compute-0 systemd[1]: Reloading.
Jan 21 13:58:34 compute-0 systemd-rc-local-generator[191611]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:58:34 compute-0 systemd-sysv-generator[191614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:58:34 compute-0 sudo[191577]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:35 compute-0 ceph-mon[75031]: pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:35 compute-0 sudo[191768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axdpmowcjaxgdtahmhqbkdmaufhujliu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003914.7002833-360-226063436457613/AnsiballZ_systemd.py'
Jan 21 13:58:35 compute-0 sudo[191768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:35 compute-0 python3.9[191770]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:35 compute-0 systemd[1]: Reloading.
Jan 21 13:58:35 compute-0 systemd-sysv-generator[191805]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:58:35 compute-0 systemd-rc-local-generator[191800]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:58:35 compute-0 sudo[191768]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:36 compute-0 sudo[191958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvorofrnbwzzmyljzsksrnjbmdstyemz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003915.8033473-360-246877603836710/AnsiballZ_systemd.py'
Jan 21 13:58:36 compute-0 sudo[191958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:36 compute-0 python3.9[191960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:36 compute-0 sudo[191958]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:36 compute-0 podman[191962]: 2026-01-21 13:58:36.551180223 +0000 UTC m=+0.105530002 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Jan 21 13:58:36 compute-0 sudo[192133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtyolsaohpwdqbvcioejtepymvgdoqzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003916.6449277-360-134630310629361/AnsiballZ_systemd.py'
Jan 21 13:58:36 compute-0 sudo[192133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:37 compute-0 python3.9[192135]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:37 compute-0 systemd[1]: Reloading.
Jan 21 13:58:37 compute-0 ceph-mon[75031]: pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:37 compute-0 systemd-sysv-generator[192170]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:58:37 compute-0 systemd-rc-local-generator[192166]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:58:37 compute-0 sudo[192133]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:38 compute-0 sudo[192323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsehhwrytjsrpspoyawzexunposgjoxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003918.2422004-396-128846329498230/AnsiballZ_systemd.py'
Jan 21 13:58:38 compute-0 sudo[192323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:38 compute-0 python3.9[192325]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 21 13:58:38 compute-0 systemd[1]: Reloading.
Jan 21 13:58:38 compute-0 ceph-mon[75031]: pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:39 compute-0 systemd-sysv-generator[192357]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:58:39 compute-0 systemd-rc-local-generator[192352]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:58:39 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 21 13:58:39 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 21 13:58:39 compute-0 sudo[192323]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:58:39
Jan 21 13:58:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:58:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:58:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'backups', 'vms', 'default.rgw.control', '.rgw.root']
Jan 21 13:58:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:58:39 compute-0 sudo[192515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-padpzsxhxgfcmpgbalalleeiosdbjtym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003919.5212278-404-163591016516274/AnsiballZ_systemd.py'
Jan 21 13:58:39 compute-0 sudo[192515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:40 compute-0 python3.9[192517]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:40 compute-0 sudo[192515]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:40 compute-0 sudo[192670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqmsazvppojsuokyvpcrqtzgpvyvetmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003920.3388803-404-255220818236290/AnsiballZ_systemd.py'
Jan 21 13:58:40 compute-0 sudo[192670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:40 compute-0 python3.9[192672]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:58:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:58:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:58:40 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:58:41 compute-0 sudo[192670]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:58:41 compute-0 sudo[192825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmbmiggtfeosnwsesffcrkvwtgijxnzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003921.2353625-404-83824922247738/AnsiballZ_systemd.py'
Jan 21 13:58:41 compute-0 sudo[192825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:42 compute-0 python3.9[192827]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:42 compute-0 ceph-mon[75031]: pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:42 compute-0 sudo[192825]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:43 compute-0 sudo[192980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekapnefpiqcqivuypdgylsbkuqknljqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003922.9377825-404-252912567887527/AnsiballZ_systemd.py'
Jan 21 13:58:43 compute-0 sudo[192980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:43 compute-0 python3.9[192982]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:43 compute-0 sudo[192980]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:44 compute-0 sudo[193135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkwishbmjbcfkxhnccqtdxwamgcayuiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003923.7470238-404-275370435633964/AnsiballZ_systemd.py'
Jan 21 13:58:44 compute-0 sudo[193135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:44 compute-0 ceph-mon[75031]: pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:44 compute-0 python3.9[193137]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:44 compute-0 sudo[193135]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:44 compute-0 sudo[193290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbrwxsxenpxvrwzdsctdvdysgfkubmbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003924.6655593-404-199195682108651/AnsiballZ_systemd.py'
Jan 21 13:58:44 compute-0 sudo[193290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:45 compute-0 ceph-mon[75031]: pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:45 compute-0 python3.9[193292]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:45 compute-0 sudo[193290]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:45 compute-0 sudo[193445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukacgduazyafmpicbeulmyzogrrfnhwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003925.4873357-404-92289152899241/AnsiballZ_systemd.py'
Jan 21 13:58:45 compute-0 sudo[193445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:46 compute-0 python3.9[193447]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:46 compute-0 sudo[193445]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:46 compute-0 sudo[193600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffiykogqfzyvquoepamgwcolwhjlodia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003926.3164835-404-165254109591409/AnsiballZ_systemd.py'
Jan 21 13:58:46 compute-0 sudo[193600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:46 compute-0 python3.9[193602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:46 compute-0 sudo[193600]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:47 compute-0 ceph-mon[75031]: pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:47 compute-0 sudo[193755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zntymrprnjtedogfuvrksjboiglphuml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003927.0535433-404-74520035026738/AnsiballZ_systemd.py'
Jan 21 13:58:47 compute-0 sudo[193755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:47 compute-0 python3.9[193757]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:47 compute-0 sudo[193755]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:48 compute-0 sudo[193910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abxazvbykeppdrpbhtdrwmwzetopceja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003927.8305123-404-79013415412912/AnsiballZ_systemd.py'
Jan 21 13:58:48 compute-0 sudo[193910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:48 compute-0 python3.9[193912]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:48 compute-0 sudo[193910]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:49 compute-0 sudo[194065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chclprcyybitvcqgwwreicornvmtwatz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003928.7230256-404-252810680350576/AnsiballZ_systemd.py'
Jan 21 13:58:49 compute-0 sudo[194065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:49 compute-0 ceph-mon[75031]: pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:49 compute-0 python3.9[194067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:49 compute-0 sudo[194065]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:49 compute-0 sudo[194220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmrntivpwccftqisklwacurjkammkxzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003929.5271378-404-170910618105096/AnsiballZ_systemd.py'
Jan 21 13:58:49 compute-0 sudo[194220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:50 compute-0 python3.9[194222]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:50 compute-0 sudo[194220]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:58:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 13:58:50 compute-0 sudo[194375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trliyszfimlpwkzmakvsaibbgrlgooox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003930.4456115-404-93022218058829/AnsiballZ_systemd.py'
Jan 21 13:58:50 compute-0 sudo[194375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:50 compute-0 sudo[194378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:58:50 compute-0 sudo[194378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:58:50 compute-0 sudo[194378]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:50 compute-0 sudo[194403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:58:50 compute-0 sudo[194403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:58:51 compute-0 python3.9[194377]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:51 compute-0 sudo[194375]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:51 compute-0 sudo[194600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuxyrmvcjshmopvofcuzyzukqidvjykr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003931.3370717-404-104372082408008/AnsiballZ_systemd.py'
Jan 21 13:58:51 compute-0 sudo[194600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:51 compute-0 sudo[194403]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:51 compute-0 python3.9[194602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 21 13:58:52 compute-0 sudo[194600]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:58:52 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:58:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:58:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:58:52 compute-0 ceph-mon[75031]: pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:58:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:58:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:58:52 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:58:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:58:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:58:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:58:52 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:58:52 compute-0 sudo[194642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:58:52 compute-0 sudo[194642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:58:52 compute-0 sudo[194642]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:52 compute-0 sudo[194670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 13:58:52 compute-0 sudo[194670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:58:52 compute-0 sudo[194831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwkhgeninkimyplxavjengokcievxcxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003932.4470518-506-275390313782456/AnsiballZ_file.py'
Jan 21 13:58:52 compute-0 sudo[194831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:52 compute-0 podman[194829]: 2026-01-21 13:58:52.723645147 +0000 UTC m=+0.031600469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:58:52 compute-0 python3.9[194838]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:58:52 compute-0 sudo[194831]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:52 compute-0 podman[194829]: 2026-01-21 13:58:52.970788656 +0000 UTC m=+0.278743948 container create caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:58:53 compute-0 systemd[1]: Started libpod-conmon-caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292.scope.
Jan 21 13:58:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:58:53 compute-0 sudo[195001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inalfuvruigkwomadlholhhnfntxrbgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003933.0727906-506-250352613726115/AnsiballZ_file.py'
Jan 21 13:58:53 compute-0 sudo[195001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:53 compute-0 python3.9[195003]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:58:53 compute-0 sudo[195001]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:53 compute-0 ceph-mon[75031]: pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:58:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:58:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:58:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:58:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:58:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:58:53 compute-0 podman[194829]: 2026-01-21 13:58:53.75133611 +0000 UTC m=+1.059291502 container init caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 13:58:53 compute-0 podman[194829]: 2026-01-21 13:58:53.766765384 +0000 UTC m=+1.074720676 container start caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:58:53 compute-0 relaxed_morse[194960]: 167 167
Jan 21 13:58:53 compute-0 systemd[1]: libpod-caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292.scope: Deactivated successfully.
Jan 21 13:58:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:53 compute-0 podman[194829]: 2026-01-21 13:58:53.945165504 +0000 UTC m=+1.253120806 container attach caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 13:58:53 compute-0 podman[194829]: 2026-01-21 13:58:53.945635676 +0000 UTC m=+1.253590978 container died caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:58:54 compute-0 sudo[195167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfsqbmxpuqwfxxeyjylebacdbxdaqhez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003933.7631638-506-106669935525852/AnsiballZ_file.py'
Jan 21 13:58:54 compute-0 sudo[195167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:54 compute-0 python3.9[195169]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:58:54 compute-0 sudo[195167]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9094fba0c0fe3fac2438c586e74dc969eb09d1251db311695b788167f4ab754f-merged.mount: Deactivated successfully.
Jan 21 13:58:54 compute-0 podman[194829]: 2026-01-21 13:58:54.475471436 +0000 UTC m=+1.783426728 container remove caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_morse, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 13:58:54 compute-0 systemd[1]: libpod-conmon-caffe1fc66072dfa59e591c949ad0715446366fc19cb0e7a97d18b629091a292.scope: Deactivated successfully.
Jan 21 13:58:55 compute-0 sudo[195338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcrewhvfjpbawwjovyponvmgluqpyqlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003934.484372-506-246292412365308/AnsiballZ_file.py'
Jan 21 13:58:55 compute-0 sudo[195338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:55 compute-0 ceph-mon[75031]: pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:55 compute-0 podman[195276]: 2026-01-21 13:58:55.122129641 +0000 UTC m=+0.515029122 container create d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 13:58:55 compute-0 systemd[1]: Started libpod-conmon-d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1.scope.
Jan 21 13:58:55 compute-0 podman[195276]: 2026-01-21 13:58:55.098344994 +0000 UTC m=+0.491244595 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:58:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:55 compute-0 podman[195276]: 2026-01-21 13:58:55.216160573 +0000 UTC m=+0.609060084 container init d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 13:58:55 compute-0 podman[195276]: 2026-01-21 13:58:55.225145672 +0000 UTC m=+0.618045163 container start d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 13:58:55 compute-0 podman[195276]: 2026-01-21 13:58:55.241126219 +0000 UTC m=+0.634025760 container attach d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 13:58:55 compute-0 python3.9[195340]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:58:55 compute-0 sudo[195338]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:55 compute-0 sudo[195514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gubpssjorvxcvtspucmiitacuetjdfyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003935.4632335-506-97867983095105/AnsiballZ_file.py'
Jan 21 13:58:55 compute-0 sudo[195514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:55 compute-0 serene_sutherland[195346]: --> passed data devices: 0 physical, 3 LVM
Jan 21 13:58:55 compute-0 serene_sutherland[195346]: --> All data devices are unavailable
Jan 21 13:58:55 compute-0 systemd[1]: libpod-d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1.scope: Deactivated successfully.
Jan 21 13:58:55 compute-0 podman[195276]: 2026-01-21 13:58:55.717057921 +0000 UTC m=+1.109957422 container died d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-0917804dbab626601291216e4728ea6aedf6fd12274ff01de6fa4e0301aeae2f-merged.mount: Deactivated successfully.
Jan 21 13:58:55 compute-0 podman[195276]: 2026-01-21 13:58:55.772152637 +0000 UTC m=+1.165052118 container remove d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:58:55 compute-0 systemd[1]: libpod-conmon-d4bea3b64cf7bfb22c1a121302c7cce21630f3fc4bcf935c25dd044332cf8fc1.scope: Deactivated successfully.
Jan 21 13:58:55 compute-0 sudo[194670]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:55 compute-0 sudo[195529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:58:55 compute-0 sudo[195529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:58:55 compute-0 sudo[195529]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:55 compute-0 sudo[195554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 13:58:55 compute-0 sudo[195554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:58:55 compute-0 python3.9[195517]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:58:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:55 compute-0 sudo[195514]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:58:56 compute-0 podman[195670]: 2026-01-21 13:58:56.186462773 +0000 UTC m=+0.043114867 container create b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 13:58:56 compute-0 systemd[1]: Started libpod-conmon-b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224.scope.
Jan 21 13:58:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:58:56 compute-0 podman[195670]: 2026-01-21 13:58:56.163276421 +0000 UTC m=+0.019928535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:58:56 compute-0 podman[195670]: 2026-01-21 13:58:56.312074262 +0000 UTC m=+0.168726376 container init b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:58:56 compute-0 podman[195670]: 2026-01-21 13:58:56.320147008 +0000 UTC m=+0.176799102 container start b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:58:56 compute-0 sudo[195759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnkfegyaoakevyzzkvnhuoufvjdlqryl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003936.060882-506-121359414420857/AnsiballZ_file.py'
Jan 21 13:58:56 compute-0 podman[195670]: 2026-01-21 13:58:56.3235199 +0000 UTC m=+0.180172034 container attach b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 13:58:56 compute-0 systemd[1]: libpod-b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224.scope: Deactivated successfully.
Jan 21 13:58:56 compute-0 gracious_proskuriakova[195714]: 167 167
Jan 21 13:58:56 compute-0 conmon[195714]: conmon b9e406e34e94434a98f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224.scope/container/memory.events
Jan 21 13:58:56 compute-0 sudo[195759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:56 compute-0 podman[195670]: 2026-01-21 13:58:56.326371709 +0000 UTC m=+0.183023813 container died b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:58:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5387a821d54811110a18fb71888313887b2f825a3b99e4a786577cd54b91669e-merged.mount: Deactivated successfully.
Jan 21 13:58:56 compute-0 podman[195670]: 2026-01-21 13:58:56.449778204 +0000 UTC m=+0.306430338 container remove b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_proskuriakova, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 13:58:56 compute-0 systemd[1]: libpod-conmon-b9e406e34e94434a98f8d2d6a7003fb07a03e3d84e6bdb4955137d8ec7419224.scope: Deactivated successfully.
Jan 21 13:58:56 compute-0 python3.9[195764]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 13:58:56 compute-0 sudo[195759]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:56 compute-0 podman[195788]: 2026-01-21 13:58:56.6333529 +0000 UTC m=+0.040157805 container create d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:58:56 compute-0 systemd[1]: Started libpod-conmon-d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27.scope.
Jan 21 13:58:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fdab4d26a7ea7f1d7bd552142995833cc25a14745118d787d18f2305270d81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fdab4d26a7ea7f1d7bd552142995833cc25a14745118d787d18f2305270d81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fdab4d26a7ea7f1d7bd552142995833cc25a14745118d787d18f2305270d81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fdab4d26a7ea7f1d7bd552142995833cc25a14745118d787d18f2305270d81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:56 compute-0 podman[195788]: 2026-01-21 13:58:56.616015519 +0000 UTC m=+0.022820444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:58:56 compute-0 podman[195788]: 2026-01-21 13:58:56.756045768 +0000 UTC m=+0.162850703 container init d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 13:58:56 compute-0 podman[195788]: 2026-01-21 13:58:56.766342927 +0000 UTC m=+0.173147842 container start d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 13:58:56 compute-0 podman[195788]: 2026-01-21 13:58:56.773809609 +0000 UTC m=+0.180614514 container attach d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:58:57 compute-0 sad_tesla[195825]: {
Jan 21 13:58:57 compute-0 sad_tesla[195825]:     "0": [
Jan 21 13:58:57 compute-0 sad_tesla[195825]:         {
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "devices": [
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "/dev/loop3"
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             ],
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_name": "ceph_lv0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_size": "21470642176",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "name": "ceph_lv0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "tags": {
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.cluster_name": "ceph",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.crush_device_class": "",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.encrypted": "0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.objectstore": "bluestore",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.osd_id": "0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.type": "block",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.vdo": "0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.with_tpm": "0"
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             },
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "type": "block",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "vg_name": "ceph_vg0"
Jan 21 13:58:57 compute-0 sad_tesla[195825]:         }
Jan 21 13:58:57 compute-0 sad_tesla[195825]:     ],
Jan 21 13:58:57 compute-0 sad_tesla[195825]:     "1": [
Jan 21 13:58:57 compute-0 sad_tesla[195825]:         {
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "devices": [
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "/dev/loop4"
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             ],
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_name": "ceph_lv1",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_size": "21470642176",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "name": "ceph_lv1",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "tags": {
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.cluster_name": "ceph",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.crush_device_class": "",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.encrypted": "0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.objectstore": "bluestore",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.osd_id": "1",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.type": "block",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.vdo": "0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.with_tpm": "0"
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             },
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "type": "block",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "vg_name": "ceph_vg1"
Jan 21 13:58:57 compute-0 sad_tesla[195825]:         }
Jan 21 13:58:57 compute-0 sad_tesla[195825]:     ],
Jan 21 13:58:57 compute-0 sad_tesla[195825]:     "2": [
Jan 21 13:58:57 compute-0 sad_tesla[195825]:         {
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "devices": [
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "/dev/loop5"
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             ],
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_name": "ceph_lv2",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_size": "21470642176",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "name": "ceph_lv2",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "tags": {
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.cluster_name": "ceph",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.crush_device_class": "",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.encrypted": "0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.objectstore": "bluestore",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.osd_id": "2",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.type": "block",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.vdo": "0",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:                 "ceph.with_tpm": "0"
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             },
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "type": "block",
Jan 21 13:58:57 compute-0 sad_tesla[195825]:             "vg_name": "ceph_vg2"
Jan 21 13:58:57 compute-0 sad_tesla[195825]:         }
Jan 21 13:58:57 compute-0 sad_tesla[195825]:     ]
Jan 21 13:58:57 compute-0 sad_tesla[195825]: }
Jan 21 13:58:57 compute-0 systemd[1]: libpod-d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27.scope: Deactivated successfully.
Jan 21 13:58:57 compute-0 conmon[195825]: conmon d66f3a509c0533255c6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27.scope/container/memory.events
Jan 21 13:58:57 compute-0 podman[195788]: 2026-01-21 13:58:57.095736113 +0000 UTC m=+0.502541018 container died d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 13:58:57 compute-0 ceph-mon[75031]: pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-23fdab4d26a7ea7f1d7bd552142995833cc25a14745118d787d18f2305270d81-merged.mount: Deactivated successfully.
Jan 21 13:58:57 compute-0 podman[195788]: 2026-01-21 13:58:57.143764768 +0000 UTC m=+0.550569673 container remove d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_tesla, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 13:58:57 compute-0 systemd[1]: libpod-conmon-d66f3a509c0533255c6a1dcb6ed14b4c5648948adff516e6d8b56519addc9b27.scope: Deactivated successfully.
Jan 21 13:58:57 compute-0 sudo[195554]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:57 compute-0 sudo[195972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:58:57 compute-0 sudo[195972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:58:57 compute-0 sudo[195972]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:57 compute-0 python3.9[195959]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 13:58:57 compute-0 sudo[195997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 13:58:57 compute-0 sudo[195997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:58:57 compute-0 podman[196082]: 2026-01-21 13:58:57.584891245 +0000 UTC m=+0.041768255 container create bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 21 13:58:57 compute-0 systemd[1]: Started libpod-conmon-bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1.scope.
Jan 21 13:58:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:58:57 compute-0 podman[196082]: 2026-01-21 13:58:57.566794646 +0000 UTC m=+0.023671706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:58:57 compute-0 podman[196082]: 2026-01-21 13:58:57.673839904 +0000 UTC m=+0.130716944 container init bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 13:58:57 compute-0 podman[196082]: 2026-01-21 13:58:57.681303095 +0000 UTC m=+0.138180095 container start bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 13:58:57 compute-0 podman[196082]: 2026-01-21 13:58:57.685593539 +0000 UTC m=+0.142470579 container attach bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 13:58:57 compute-0 silly_montalcini[196126]: 167 167
Jan 21 13:58:57 compute-0 systemd[1]: libpod-bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1.scope: Deactivated successfully.
Jan 21 13:58:57 compute-0 podman[196082]: 2026-01-21 13:58:57.690952819 +0000 UTC m=+0.147829859 container died bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 13:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8de7d01bb20ae9f13ec5bcf3bcb1d63d1fc17df48f4852e9d9268b47c92e4c4-merged.mount: Deactivated successfully.
Jan 21 13:58:57 compute-0 podman[196082]: 2026-01-21 13:58:57.730620932 +0000 UTC m=+0.187497942 container remove bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 13:58:57 compute-0 systemd[1]: libpod-conmon-bcc54a4268af20678a17331acd0266d30cda10e12d3477f78ffa9f5aedba44d1.scope: Deactivated successfully.
Jan 21 13:58:57 compute-0 podman[196194]: 2026-01-21 13:58:57.920752117 +0000 UTC m=+0.041990530 container create 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:58:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:57 compute-0 systemd[1]: Started libpod-conmon-4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44.scope.
Jan 21 13:58:57 compute-0 sudo[196240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppujiiunqatnsykstlvnktawhbotrhjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003937.5232837-557-10781633875981/AnsiballZ_stat.py'
Jan 21 13:58:57 compute-0 sudo[196240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 13:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb6ba8b11817416889d9db465a904e2a2c84481d4b5b3bdb476f4e40c33f8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb6ba8b11817416889d9db465a904e2a2c84481d4b5b3bdb476f4e40c33f8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb6ba8b11817416889d9db465a904e2a2c84481d4b5b3bdb476f4e40c33f8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdb6ba8b11817416889d9db465a904e2a2c84481d4b5b3bdb476f4e40c33f8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 13:58:57 compute-0 podman[196194]: 2026-01-21 13:58:57.904330668 +0000 UTC m=+0.025569061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 13:58:58 compute-0 podman[196194]: 2026-01-21 13:58:58.007963843 +0000 UTC m=+0.129202256 container init 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 13:58:58 compute-0 podman[196194]: 2026-01-21 13:58:58.018051548 +0000 UTC m=+0.139289941 container start 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 13:58:58 compute-0 podman[196194]: 2026-01-21 13:58:58.022619549 +0000 UTC m=+0.143857972 container attach 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:58:58 compute-0 python3.9[196244]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:58:58 compute-0 sudo[196240]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:58 compute-0 lvm[196426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 13:58:58 compute-0 lvm[196426]: VG ceph_vg0 finished
Jan 21 13:58:58 compute-0 lvm[196434]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 13:58:58 compute-0 lvm[196434]: VG ceph_vg1 finished
Jan 21 13:58:58 compute-0 lvm[196447]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 13:58:58 compute-0 lvm[196447]: VG ceph_vg2 finished
Jan 21 13:58:58 compute-0 sudo[196445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgjtwbxyciznkglnvnkmijzvhqoldkrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003937.5232837-557-10781633875981/AnsiballZ_copy.py'
Jan 21 13:58:58 compute-0 sudo[196445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:58 compute-0 dreamy_cerf[196242]: {}
Jan 21 13:58:58 compute-0 systemd[1]: libpod-4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44.scope: Deactivated successfully.
Jan 21 13:58:58 compute-0 systemd[1]: libpod-4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44.scope: Consumed 1.265s CPU time.
Jan 21 13:58:58 compute-0 podman[196194]: 2026-01-21 13:58:58.864113593 +0000 UTC m=+0.985352006 container died 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 13:58:58 compute-0 python3.9[196449]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003937.5232837-557-10781633875981/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:58:58 compute-0 sudo[196445]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fdb6ba8b11817416889d9db465a904e2a2c84481d4b5b3bdb476f4e40c33f8e-merged.mount: Deactivated successfully.
Jan 21 13:58:59 compute-0 podman[196194]: 2026-01-21 13:58:59.069073038 +0000 UTC m=+1.190311431 container remove 4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 13:58:59 compute-0 systemd[1]: libpod-conmon-4a3b6e141c4f5ada456299fa68bc3f636ec7d520cb3c049deb2753c444209c44.scope: Deactivated successfully.
Jan 21 13:58:59 compute-0 sudo[195997]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 13:58:59 compute-0 ceph-mon[75031]: pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:58:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:58:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 13:58:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:58:59 compute-0 sudo[196539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 13:58:59 compute-0 sudo[196539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:58:59 compute-0 sudo[196539]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:59 compute-0 sudo[196637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spflffkimcvrjhlopstglqnazobczfjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003939.0820577-557-264960872901528/AnsiballZ_stat.py'
Jan 21 13:58:59 compute-0 sudo[196637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:58:59 compute-0 python3.9[196639]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:58:59 compute-0 sudo[196637]: pam_unix(sudo:session): session closed for user root
Jan 21 13:58:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:00 compute-0 sudo[196762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuefhrxporuqvhrnjdqswxvhxmsvwwfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003939.0820577-557-264960872901528/AnsiballZ_copy.py'
Jan 21 13:59:00 compute-0 sudo[196762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:00 compute-0 python3.9[196764]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003939.0820577-557-264960872901528/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:00 compute-0 sudo[196762]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:00 compute-0 sudo[196914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovyxiausuwrpdcvumqfpwdhmrrhtqfgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003940.374972-557-165156706703402/AnsiballZ_stat.py'
Jan 21 13:59:00 compute-0 sudo[196914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:00 compute-0 podman[196916]: 2026-01-21 13:59:00.803914984 +0000 UTC m=+0.122326159 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 13:59:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:02 compute-0 python3.9[196917]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:02 compute-0 sudo[196914]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:02 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:59:02 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:59:02 compute-0 sudo[197065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhcruzckytkthiwcsbbablalyytmfkhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003940.374972-557-165156706703402/AnsiballZ_copy.py'
Jan 21 13:59:02 compute-0 sudo[197065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:03 compute-0 python3.9[197067]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003940.374972-557-165156706703402/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:03 compute-0 sudo[197065]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:03 compute-0 sudo[197217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ounsopilkgzcwwajeoispqjufkvpojci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003943.3088322-557-112566675152275/AnsiballZ_stat.py'
Jan 21 13:59:03 compute-0 sudo[197217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:03 compute-0 python3.9[197219]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:03 compute-0 ceph-mon[75031]: pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:03 compute-0 ceph-mon[75031]: pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:03 compute-0 sudo[197217]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:04 compute-0 sudo[197342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrglyphjrvthahdnuzzofrnguuardrad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003943.3088322-557-112566675152275/AnsiballZ_copy.py'
Jan 21 13:59:04 compute-0 sudo[197342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:04 compute-0 python3.9[197344]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003943.3088322-557-112566675152275/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:04 compute-0 sudo[197342]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:04 compute-0 sudo[197494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chlrktwnmdxqdxdcjydjyrpwilirqkbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003944.438018-557-122441901921560/AnsiballZ_stat.py'
Jan 21 13:59:04 compute-0 sudo[197494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:04 compute-0 python3.9[197496]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:04 compute-0 sudo[197494]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:04 compute-0 ceph-mon[75031]: pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:05 compute-0 sudo[197619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlutxvwdgbpvvtzrusuhudmoxjuogxge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003944.438018-557-122441901921560/AnsiballZ_copy.py'
Jan 21 13:59:05 compute-0 sudo[197619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:05 compute-0 python3.9[197621]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003944.438018-557-122441901921560/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:05 compute-0 sudo[197619]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:05 compute-0 sudo[197771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciapyljxjwwlpnnwxcayvvyowkhijpwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003945.6471157-557-56167089212790/AnsiballZ_stat.py'
Jan 21 13:59:05 compute-0 sudo[197771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:06 compute-0 python3.9[197773]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:06 compute-0 sudo[197771]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:06 compute-0 sudo[197896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivqfhcpyolnpoqwfgbekuqmukbmugqms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003945.6471157-557-56167089212790/AnsiballZ_copy.py'
Jan 21 13:59:06 compute-0 sudo[197896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:06 compute-0 python3.9[197898]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003945.6471157-557-56167089212790/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:06 compute-0 sudo[197896]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:06 compute-0 podman[197899]: 2026-01-21 13:59:06.816449095 +0000 UTC m=+0.047722199 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 21 13:59:07 compute-0 sudo[198067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtombzgbsgddoawdjtvtbqllldtcqyot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003946.9041576-557-22620768676200/AnsiballZ_stat.py'
Jan 21 13:59:07 compute-0 sudo[198067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:07 compute-0 ceph-mon[75031]: pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:07 compute-0 python3.9[198069]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:07 compute-0 sudo[198067]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:07 compute-0 sudo[198190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiynsajajxexhzefgpjgotkcjtghjzqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003946.9041576-557-22620768676200/AnsiballZ_copy.py'
Jan 21 13:59:07 compute-0 sudo[198190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:08 compute-0 python3.9[198192]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003946.9041576-557-22620768676200/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:08 compute-0 sudo[198190]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:08 compute-0 sudo[198342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-furhdggfcikezimxmxbpjhysvegqbkep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003948.170144-557-142586222554499/AnsiballZ_stat.py'
Jan 21 13:59:08 compute-0 sudo[198342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:08 compute-0 python3.9[198344]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:08 compute-0 sudo[198342]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:09 compute-0 sudo[198467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxgadevbsymduktmdhtbwfgvhfwphwfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003948.170144-557-142586222554499/AnsiballZ_copy.py'
Jan 21 13:59:09 compute-0 sudo[198467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:09 compute-0 python3.9[198469]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769003948.170144-557-142586222554499/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:09 compute-0 sudo[198467]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:09 compute-0 ceph-mon[75031]: pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:09 compute-0 sudo[198619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fullbkmfrhfqyzvgaxchxqiyshovzmjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003949.4412289-670-110282608654907/AnsiballZ_command.py'
Jan 21 13:59:09 compute-0 sudo[198619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:09 compute-0 python3.9[198621]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 21 13:59:09 compute-0 sudo[198619]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:10 compute-0 sudo[198772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlaesbkqdpqdzyujkximqyktyhmgpgwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003950.164497-679-50538447474574/AnsiballZ_file.py'
Jan 21 13:59:10 compute-0 sudo[198772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:10 compute-0 python3.9[198774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:10 compute-0 sudo[198772]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:59:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:59:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:59:10 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:59:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:59:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:59:11 compute-0 sudo[198924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvfnsytzqdcierwunvisqlkgsmrjcdsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003950.8025417-679-33468978609879/AnsiballZ_file.py'
Jan 21 13:59:11 compute-0 sudo[198924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:11 compute-0 python3.9[198926]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:11 compute-0 sudo[198924]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:11 compute-0 sudo[199076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukruheqtzqyhvvnhcejdnayvhabdfuqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003951.411816-679-171279364378435/AnsiballZ_file.py'
Jan 21 13:59:11 compute-0 sudo[199076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:11 compute-0 ceph-mon[75031]: pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:11 compute-0 python3.9[199078]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:11 compute-0 sudo[199076]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:12 compute-0 sudo[199228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpjagedfwovnxaeuzizzmvygsncmljxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003951.9862525-679-232893287853315/AnsiballZ_file.py'
Jan 21 13:59:12 compute-0 sudo[199228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:12 compute-0 python3.9[199230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:12 compute-0 sudo[199228]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:12 compute-0 sudo[199380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouetnevbdkdafnojxmsfncwxtovvmaiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003952.6403625-679-26824855960963/AnsiballZ_file.py'
Jan 21 13:59:12 compute-0 sudo[199380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:12 compute-0 ceph-mon[75031]: pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:13 compute-0 python3.9[199382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:13 compute-0 sudo[199380]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:13 compute-0 sudo[199532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypsnavnydjqbhymdjypepudtnkqmlgli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003953.1927247-679-167422791591733/AnsiballZ_file.py'
Jan 21 13:59:13 compute-0 sudo[199532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:13 compute-0 python3.9[199534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:13 compute-0 sudo[199532]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:14 compute-0 sudo[199684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoragipsmfbgqzsqqskgxewskabgdzce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003953.8217843-679-92182912418008/AnsiballZ_file.py'
Jan 21 13:59:14 compute-0 sudo[199684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:14 compute-0 python3.9[199686]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:14 compute-0 sudo[199684]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:14 compute-0 sudo[199836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydqrcdelqrbbdpueiaeklkmlveywdlrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003954.503339-679-10844239139825/AnsiballZ_file.py'
Jan 21 13:59:14 compute-0 sudo[199836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:15 compute-0 python3.9[199838]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:15 compute-0 sudo[199836]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:15 compute-0 ceph-mon[75031]: pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:15 compute-0 sudo[199988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjjintmdyffefalxhxieyxnytnilybur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003955.1714346-679-121637300989916/AnsiballZ_file.py'
Jan 21 13:59:15 compute-0 sudo[199988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:15 compute-0 python3.9[199990]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:15 compute-0 sudo[199988]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:16 compute-0 sudo[200140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phxdrkkpilkdtlxztfcnoqrchbwbxpzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003955.8022547-679-267553509555686/AnsiballZ_file.py'
Jan 21 13:59:16 compute-0 sudo[200140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:16 compute-0 python3.9[200142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:16 compute-0 sudo[200140]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:16 compute-0 sudo[200292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbquglxztketkoekwsagfrqyoueucyml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003956.4523902-679-70925440308853/AnsiballZ_file.py'
Jan 21 13:59:16 compute-0 sudo[200292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:16 compute-0 python3.9[200294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:16 compute-0 sudo[200292]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:17 compute-0 ceph-mon[75031]: pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:17 compute-0 sudo[200444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xplpxzhryjpyzmjqqrxyejrilfycyqcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003957.111686-679-220946563748101/AnsiballZ_file.py'
Jan 21 13:59:17 compute-0 sudo[200444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:17 compute-0 python3.9[200446]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:17 compute-0 sudo[200444]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:18 compute-0 sudo[200596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itdgxctfskncdrjqmvgkkealutoypfar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003957.7972522-679-47394666251917/AnsiballZ_file.py'
Jan 21 13:59:18 compute-0 sudo[200596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:18 compute-0 python3.9[200598]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:18 compute-0 sudo[200596]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:18 compute-0 sudo[200748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwspgxqluvhuxiuchkfzljogeazhbmba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003958.4844275-679-209512763605667/AnsiballZ_file.py'
Jan 21 13:59:18 compute-0 sudo[200748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:19 compute-0 python3.9[200750]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:19 compute-0 sudo[200748]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:19 compute-0 sudo[200900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsczwefoqmjzanxrvbxcwnabztulzeyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003959.2498343-778-275291954432556/AnsiballZ_stat.py'
Jan 21 13:59:19 compute-0 sudo[200900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:19 compute-0 python3.9[200902]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:19 compute-0 sudo[200900]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:19 compute-0 ceph-mon[75031]: pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:20 compute-0 sudo[201023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcowoyejeuxoscdhjyjcabqyjhvqvmev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003959.2498343-778-275291954432556/AnsiballZ_copy.py'
Jan 21 13:59:20 compute-0 sudo[201023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:20 compute-0 python3.9[201025]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003959.2498343-778-275291954432556/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:20 compute-0 sudo[201023]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:20 compute-0 sudo[201175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqxbfoqrviknskdwtzodghvlipdnsipk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003960.453589-778-73771734669294/AnsiballZ_stat.py'
Jan 21 13:59:20 compute-0 sudo[201175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:20 compute-0 ceph-mon[75031]: pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:20 compute-0 python3.9[201177]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:20 compute-0 sudo[201175]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:21 compute-0 sudo[201298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgmllgqtqdjsptvxlmhfhjwohuzwgooe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003960.453589-778-73771734669294/AnsiballZ_copy.py'
Jan 21 13:59:21 compute-0 sudo[201298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:21 compute-0 python3.9[201300]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003960.453589-778-73771734669294/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:21 compute-0 sudo[201298]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:21 compute-0 sudo[201450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjxcfxzvcigcaotiutmoiqmqiovemvep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003961.6753933-778-83109666681728/AnsiballZ_stat.py'
Jan 21 13:59:21 compute-0 sudo[201450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:22 compute-0 python3.9[201452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:22 compute-0 sudo[201450]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:22 compute-0 sudo[201573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jealxxpimeoeifzjosftnymfmvnszjmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003961.6753933-778-83109666681728/AnsiballZ_copy.py'
Jan 21 13:59:22 compute-0 sudo[201573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:22 compute-0 python3.9[201575]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003961.6753933-778-83109666681728/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:22 compute-0 sudo[201573]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:23 compute-0 sudo[201725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmdordiviyuwclwpcxaiextezyxnuzzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003962.8937893-778-64823596107619/AnsiballZ_stat.py'
Jan 21 13:59:23 compute-0 sudo[201725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:23 compute-0 ceph-mon[75031]: pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:23 compute-0 python3.9[201727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:23 compute-0 sudo[201725]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:23 compute-0 sudo[201848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgkwvyixtmwdllyxqzdvktrvqlhinaet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003962.8937893-778-64823596107619/AnsiballZ_copy.py'
Jan 21 13:59:23 compute-0 sudo[201848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:23 compute-0 python3.9[201850]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003962.8937893-778-64823596107619/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:23 compute-0 sudo[201848]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:24 compute-0 sudo[202000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdhrxqrbkppfkfouapermyezfxoryhij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003963.9531476-778-56925722585062/AnsiballZ_stat.py'
Jan 21 13:59:24 compute-0 sudo[202000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:24 compute-0 python3.9[202002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:24 compute-0 sudo[202000]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:24 compute-0 sudo[202123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reutompnhslznnvwmhjcvfkrchwwjban ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003963.9531476-778-56925722585062/AnsiballZ_copy.py'
Jan 21 13:59:24 compute-0 sudo[202123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:24 compute-0 python3.9[202125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003963.9531476-778-56925722585062/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:24 compute-0 sudo[202123]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:25 compute-0 ceph-mon[75031]: pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:25 compute-0 sudo[202275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwpeommzpwcnkxuufxvejcmisozkvloj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003965.0572498-778-5756098653459/AnsiballZ_stat.py'
Jan 21 13:59:25 compute-0 sudo[202275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:25 compute-0 python3.9[202277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:25 compute-0 sudo[202275]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:25 compute-0 sudo[202398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvylrzdiderfdrwvusqmyqsmjcpityjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003965.0572498-778-5756098653459/AnsiballZ_copy.py'
Jan 21 13:59:25 compute-0 sudo[202398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:26 compute-0 python3.9[202400]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003965.0572498-778-5756098653459/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:26 compute-0 sudo[202398]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:26 compute-0 sudo[202550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyidyrbjeiahopyfquvbytnbxpcrqvqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003966.368809-778-266475788483077/AnsiballZ_stat.py'
Jan 21 13:59:26 compute-0 sudo[202550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:26 compute-0 python3.9[202552]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:26 compute-0 sudo[202550]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:27 compute-0 sudo[202673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgezbzkgrbfwtpquktflebxubsarkwcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003966.368809-778-266475788483077/AnsiballZ_copy.py'
Jan 21 13:59:27 compute-0 sudo[202673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:27 compute-0 ceph-mon[75031]: pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:27 compute-0 python3.9[202675]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003966.368809-778-266475788483077/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:27 compute-0 sudo[202673]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:27 compute-0 sudo[202825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhumxmppjwxeuioqnqwdzvkdojyigaer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003967.5740592-778-231667736682513/AnsiballZ_stat.py'
Jan 21 13:59:27 compute-0 sudo[202825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:28 compute-0 python3.9[202827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:28 compute-0 sudo[202825]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:28 compute-0 sudo[202948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwmixruywlytpvawmtpmaxnlpstthiqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003967.5740592-778-231667736682513/AnsiballZ_copy.py'
Jan 21 13:59:28 compute-0 sudo[202948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:28 compute-0 python3.9[202950]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003967.5740592-778-231667736682513/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:28 compute-0 sudo[202948]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:29 compute-0 sudo[203100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osrbfsddjffokgatsvwaxddvrtkiikoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003968.7920122-778-26135731959570/AnsiballZ_stat.py'
Jan 21 13:59:29 compute-0 sudo[203100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:29 compute-0 python3.9[203102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:29 compute-0 sudo[203100]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:29 compute-0 ceph-mon[75031]: pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:29 compute-0 sudo[203223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjonsntylhujcivustqhtdwhmwcoondh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003968.7920122-778-26135731959570/AnsiballZ_copy.py'
Jan 21 13:59:29 compute-0 sudo[203223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:29 compute-0 python3.9[203225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003968.7920122-778-26135731959570/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:29 compute-0 sudo[203223]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:30 compute-0 sudo[203375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwwlyfjotvcqnnhshtbueofwmsucktji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003970.0081892-778-101188716641056/AnsiballZ_stat.py'
Jan 21 13:59:30 compute-0 sudo[203375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:30 compute-0 python3.9[203377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:30 compute-0 sudo[203375]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:31 compute-0 sudo[203515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuadqhebqummuajiowxivejcbzykguou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003970.0081892-778-101188716641056/AnsiballZ_copy.py'
Jan 21 13:59:31 compute-0 sudo[203515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:31 compute-0 podman[203472]: 2026-01-21 13:59:31.07837091 +0000 UTC m=+0.102776256 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:59:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:31 compute-0 ceph-mon[75031]: pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:31 compute-0 python3.9[203521]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003970.0081892-778-101188716641056/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:31 compute-0 sudo[203515]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:31 compute-0 sudo[203676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwaxktulebmkhzgkqzrupuhxyldnmrhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003971.4297714-778-276506536456901/AnsiballZ_stat.py'
Jan 21 13:59:31 compute-0 sudo[203676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:31 compute-0 python3.9[203678]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:31 compute-0 sudo[203676]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:32 compute-0 sudo[203799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgfcuqfaljgxnlpmidzvalcxfwfcupzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003971.4297714-778-276506536456901/AnsiballZ_copy.py'
Jan 21 13:59:32 compute-0 sudo[203799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:32 compute-0 python3.9[203801]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003971.4297714-778-276506536456901/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:32 compute-0 sudo[203799]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:33 compute-0 sudo[203951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lykrlcfjinpqqjgheicnsnzscvclxsdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003972.7090404-778-121441671472826/AnsiballZ_stat.py'
Jan 21 13:59:33 compute-0 sudo[203951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:33 compute-0 python3.9[203953]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:33 compute-0 sudo[203951]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:33 compute-0 ceph-mon[75031]: pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:33 compute-0 sudo[204074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riujjicufgkosqblhlbjgmtryaqsxsde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003972.7090404-778-121441671472826/AnsiballZ_copy.py'
Jan 21 13:59:33 compute-0 sudo[204074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:33 compute-0 python3.9[204076]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003972.7090404-778-121441671472826/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:33 compute-0 sudo[204074]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:59:33.888 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 13:59:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:59:33.888 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 13:59:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 13:59:33.889 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 13:59:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:34 compute-0 sudo[204226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltqamxsvlegqlqhqptaiifypoeopsxtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003974.0347362-778-10794286874459/AnsiballZ_stat.py'
Jan 21 13:59:34 compute-0 sudo[204226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:34 compute-0 python3.9[204228]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:34 compute-0 sudo[204226]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:35 compute-0 sudo[204349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvcsqbozzjhyaeyljkajwprzjgvjnjnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003974.0347362-778-10794286874459/AnsiballZ_copy.py'
Jan 21 13:59:35 compute-0 sudo[204349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:35 compute-0 ceph-mon[75031]: pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:35 compute-0 python3.9[204351]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003974.0347362-778-10794286874459/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:35 compute-0 sudo[204349]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:35 compute-0 sudo[204501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbqvqjanbsrtcndlkxnvgywzppmzyyiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003975.4912531-778-199784469938911/AnsiballZ_stat.py'
Jan 21 13:59:35 compute-0 sudo[204501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:35 compute-0 python3.9[204503]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:35 compute-0 sudo[204501]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:36 compute-0 sudo[204624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkfugkhiozkphmajrichqvdmxcnuhnqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003975.4912531-778-199784469938911/AnsiballZ_copy.py'
Jan 21 13:59:36 compute-0 sudo[204624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:36 compute-0 python3.9[204626]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769003975.4912531-778-199784469938911/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:36 compute-0 sudo[204624]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:37 compute-0 podman[204750]: 2026-01-21 13:59:37.145484295 +0000 UTC m=+0.068636117 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 13:59:37 compute-0 python3.9[204788]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:59:37 compute-0 auditd[698]: Audit daemon rotating log files
Jan 21 13:59:37 compute-0 ceph-mon[75031]: pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:38 compute-0 sudo[204947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lncuultgdjdgpuqvtdihadnryyakicyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003977.5732214-984-223615695123209/AnsiballZ_seboolean.py'
Jan 21 13:59:38 compute-0 sudo[204947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:38 compute-0 python3.9[204949]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 21 13:59:38 compute-0 ceph-mon[75031]: pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_13:59:39
Jan 21 13:59:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 13:59:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 13:59:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'backups', 'images']
Jan 21 13:59:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 13:59:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:40 compute-0 sudo[204947]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:40 compute-0 sudo[205103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swoqkkdkdhveptabgmljjvdlokoajivk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003980.2298665-992-63223201497700/AnsiballZ_copy.py'
Jan 21 13:59:40 compute-0 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 21 13:59:40 compute-0 sudo[205103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:40 compute-0 python3.9[205105]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:40 compute-0 sudo[205103]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 13:59:41 compute-0 ceph-mon[75031]: pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 13:59:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:41 compute-0 sudo[205255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lljarzmrequrvdyfkflpwdlndwbahlbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003980.9192994-992-265504226252693/AnsiballZ_copy.py'
Jan 21 13:59:41 compute-0 sudo[205255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:41 compute-0 python3.9[205257]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:41 compute-0 sudo[205255]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:41 compute-0 sudo[205407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhdeljqkjnpylljktaxdbulrxfeynzwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003981.539926-992-210113017748516/AnsiballZ_copy.py'
Jan 21 13:59:41 compute-0 sudo[205407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:42 compute-0 python3.9[205409]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:42 compute-0 sudo[205407]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:42 compute-0 sudo[205559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoskteupcaktsluawqpncrrhrnwkaakc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003982.1866827-992-70517046297513/AnsiballZ_copy.py'
Jan 21 13:59:42 compute-0 sudo[205559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:42 compute-0 python3.9[205561]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:42 compute-0 sudo[205559]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:43 compute-0 sudo[205711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srzyksgwvgczmifbzabxkocqhgayuazh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003982.9026299-992-87460694860666/AnsiballZ_copy.py'
Jan 21 13:59:43 compute-0 sudo[205711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:43 compute-0 ceph-mon[75031]: pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:43 compute-0 python3.9[205713]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:43 compute-0 sudo[205711]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:43 compute-0 sudo[205863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idhhoxxkwjnqdutxlwsczvheheqcpfix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003983.6531668-1028-33531503443375/AnsiballZ_copy.py'
Jan 21 13:59:43 compute-0 sudo[205863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:44 compute-0 python3.9[205865]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:44 compute-0 sudo[205863]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:44 compute-0 sudo[206015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucoakjcsajxisbpbxyhmssrbgdbmzucs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003984.3688834-1028-243282965675858/AnsiballZ_copy.py'
Jan 21 13:59:44 compute-0 sudo[206015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:44 compute-0 ceph-mon[75031]: pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:44 compute-0 python3.9[206017]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:44 compute-0 sudo[206015]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:45 compute-0 sudo[206167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skqkjtsiatlbesijsxchfnetcdckdwwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003985.038708-1028-60674723554939/AnsiballZ_copy.py'
Jan 21 13:59:45 compute-0 sudo[206167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:45 compute-0 python3.9[206169]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:45 compute-0 sudo[206167]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:46 compute-0 sudo[206319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztvbzpidknnjefzrltwtvbydjmisouha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003985.7552643-1028-165859214830402/AnsiballZ_copy.py'
Jan 21 13:59:46 compute-0 sudo[206319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:46 compute-0 python3.9[206321]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:46 compute-0 sudo[206319]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:46 compute-0 sudo[206471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cobjmgmbjlsgfyqogolmkezwbjrrzwhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003986.5874002-1028-70055909972250/AnsiballZ_copy.py'
Jan 21 13:59:46 compute-0 sudo[206471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:47 compute-0 python3.9[206473]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:47 compute-0 sudo[206471]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:47 compute-0 ceph-mon[75031]: pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:47 compute-0 sudo[206623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyqgtufnvpzepcjaijwgcjlrnxqcswdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003987.3148794-1064-60132972672369/AnsiballZ_systemd.py'
Jan 21 13:59:47 compute-0 sudo[206623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:47 compute-0 python3.9[206625]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:59:47 compute-0 systemd[1]: Reloading.
Jan 21 13:59:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:48 compute-0 systemd-rc-local-generator[206653]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:59:48 compute-0 systemd-sysv-generator[206656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:59:48 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 21 13:59:48 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 21 13:59:48 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 21 13:59:48 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 21 13:59:48 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 21 13:59:48 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 21 13:59:48 compute-0 sudo[206623]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:48 compute-0 ceph-mon[75031]: pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:48 compute-0 sudo[206816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psvbrhsrkgfesobqcwodjtxxgvljrtuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003988.6117594-1064-202574741761758/AnsiballZ_systemd.py'
Jan 21 13:59:48 compute-0 sudo[206816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:49 compute-0 python3.9[206818]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:59:49 compute-0 systemd[1]: Reloading.
Jan 21 13:59:49 compute-0 systemd-rc-local-generator[206841]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:59:49 compute-0 systemd-sysv-generator[206849]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:59:49 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 21 13:59:49 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 21 13:59:49 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 21 13:59:49 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 21 13:59:49 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 21 13:59:49 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 21 13:59:49 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 21 13:59:49 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 21 13:59:49 compute-0 sudo[206816]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:50 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 21 13:59:50 compute-0 sudo[207033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxspzgxluxnkrifictfearyinvcqrwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003989.9346104-1064-157545416982980/AnsiballZ_systemd.py'
Jan 21 13:59:50 compute-0 sudo[207033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:50 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 13:59:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 13:59:50 compute-0 python3.9[207035]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:59:50 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 21 13:59:50 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 21 13:59:50 compute-0 systemd[1]: Reloading.
Jan 21 13:59:50 compute-0 systemd-rc-local-generator[207068]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:59:50 compute-0 systemd-sysv-generator[207073]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:59:50 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 21 13:59:50 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 21 13:59:50 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 21 13:59:50 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 21 13:59:50 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 21 13:59:51 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 21 13:59:51 compute-0 sudo[207033]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:51 compute-0 ceph-mon[75031]: pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:51 compute-0 sudo[207254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fapvxgitugkrjtsnbhubfayvrmztevrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003991.2142437-1064-184450708919842/AnsiballZ_systemd.py'
Jan 21 13:59:51 compute-0 sudo[207254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:51 compute-0 setroubleshoot[206962]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a05a998e-ab93-4a8a-b3db-5ee3c9b943d9
Jan 21 13:59:51 compute-0 setroubleshoot[206962]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 21 13:59:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:51 compute-0 python3.9[207256]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:59:51 compute-0 systemd[1]: Reloading.
Jan 21 13:59:51 compute-0 systemd-rc-local-generator[207284]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:59:51 compute-0 systemd-sysv-generator[207287]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:59:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:52 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 21 13:59:52 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 21 13:59:52 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 21 13:59:52 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 21 13:59:52 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 21 13:59:52 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 21 13:59:52 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 21 13:59:52 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 21 13:59:52 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 21 13:59:52 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 21 13:59:52 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 21 13:59:52 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 21 13:59:52 compute-0 sudo[207254]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:53 compute-0 sudo[207469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txrabkabamycqjrzudfdsxznzfjwjltw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003992.7550385-1064-210922653549186/AnsiballZ_systemd.py'
Jan 21 13:59:53 compute-0 sudo[207469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:53 compute-0 python3.9[207471]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 13:59:53 compute-0 systemd[1]: Reloading.
Jan 21 13:59:53 compute-0 ceph-mon[75031]: pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:53 compute-0 systemd-sysv-generator[207501]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 13:59:53 compute-0 systemd-rc-local-generator[207496]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 13:59:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:54 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 21 13:59:54 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 21 13:59:54 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 21 13:59:54 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 21 13:59:54 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 21 13:59:54 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 21 13:59:54 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 21 13:59:54 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 21 13:59:54 compute-0 sudo[207469]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:54 compute-0 sudo[207681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tksmmiyrobimllpypgegfyrhvmzsoqfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003994.5411947-1101-156292060205842/AnsiballZ_file.py'
Jan 21 13:59:54 compute-0 sudo[207681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:55 compute-0 python3.9[207683]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:55 compute-0 sudo[207681]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:55 compute-0 sudo[207833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwgwipykvlqwecagjrewlvqqhjemvqai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003995.2782092-1109-169405792364170/AnsiballZ_find.py'
Jan 21 13:59:55 compute-0 sudo[207833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:55 compute-0 python3.9[207835]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 13:59:55 compute-0 sudo[207833]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:55 compute-0 ceph-mon[75031]: pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:56 compute-0 sudo[207985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erwvdysnvlifpodmnuqkpijwgfggcbmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003996.0269206-1117-3545773031276/AnsiballZ_command.py'
Jan 21 13:59:56 compute-0 sudo[207985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:56 compute-0 python3.9[207987]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:59:56 compute-0 sudo[207985]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 13:59:56 compute-0 ceph-mon[75031]: pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:57 compute-0 python3.9[208141]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 13:59:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:58 compute-0 python3.9[208291]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 13:59:58 compute-0 python3.9[208412]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769003997.6639302-1136-173352602892668/.source.xml follow=False _original_basename=secret.xml.j2 checksum=d27a26758af4fbf69deaa7c87560773282374616 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 13:59:58 compute-0 ceph-mon[75031]: pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:59 compute-0 sudo[208562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxzjzaheglxptzaqppvvetqkucnzhzqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769003998.8138478-1151-129560194493604/AnsiballZ_command.py'
Jan 21 13:59:59 compute-0 sudo[208562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 13:59:59 compute-0 python3.9[208564]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 2f0e9cad-f0a3-5869-9cc3-8d84d071866a
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 13:59:59 compute-0 sudo[208565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 13:59:59 compute-0 sudo[208565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:59:59 compute-0 sudo[208565]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:59 compute-0 polkitd[43343]: Registered Authentication Agent for unix-process:208590:332135 (system bus name :1.2544 [pkttyagent --process 208590 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 21 13:59:59 compute-0 polkitd[43343]: Unregistered Authentication Agent for unix-process:208590:332135 (system bus name :1.2544, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 21 13:59:59 compute-0 sudo[208592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 13:59:59 compute-0 sudo[208592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 13:59:59 compute-0 polkitd[43343]: Registered Authentication Agent for unix-process:208588:332134 (system bus name :1.2546 [pkttyagent --process 208588 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 21 13:59:59 compute-0 polkitd[43343]: Unregistered Authentication Agent for unix-process:208588:332134 (system bus name :1.2546, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 21 13:59:59 compute-0 sudo[208562]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:59 compute-0 sudo[208592]: pam_unix(sudo:session): session closed for user root
Jan 21 13:59:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:59:59 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:59:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 13:59:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:59:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 13:59:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:59:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 13:59:59 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:59:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 13:59:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:59:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 13:59:59 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:59:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 13:59:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 13:59:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 13:59:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 13:59:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 13:59:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 13:59:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:00:00 compute-0 sudo[208806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:00:00 compute-0 sudo[208806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:00:00 compute-0 sudo[208806]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:00 compute-0 python3.9[208801]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:00 compute-0 sudo[208831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:00:00 compute-0 sudo[208831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:00:00 compute-0 podman[208943]: 2026-01-21 14:00:00.336781078 +0000 UTC m=+0.038995717 container create 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 14:00:00 compute-0 systemd[1]: Started libpod-conmon-8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96.scope.
Jan 21 14:00:00 compute-0 podman[208943]: 2026-01-21 14:00:00.318602367 +0000 UTC m=+0.020816916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:00:00 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:00:00 compute-0 sudo[209035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyucwgjlrnrpveootofwitpyqrtjypbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004000.2313464-1167-173990628185612/AnsiballZ_command.py'
Jan 21 14:00:00 compute-0 sudo[209035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:00 compute-0 podman[208943]: 2026-01-21 14:00:00.613968612 +0000 UTC m=+0.316183161 container init 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 14:00:00 compute-0 podman[208943]: 2026-01-21 14:00:00.627248614 +0000 UTC m=+0.329463143 container start 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:00:00 compute-0 podman[208943]: 2026-01-21 14:00:00.633120637 +0000 UTC m=+0.335335196 container attach 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 14:00:00 compute-0 unruffled_davinci[208982]: 167 167
Jan 21 14:00:00 compute-0 systemd[1]: libpod-8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96.scope: Deactivated successfully.
Jan 21 14:00:00 compute-0 podman[208943]: 2026-01-21 14:00:00.636421956 +0000 UTC m=+0.338636505 container died 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 14:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f5d18cb8f9a568bfd15fd8debeb9c85ac261f492232d2c36ba7c3b439ddf9cc-merged.mount: Deactivated successfully.
Jan 21 14:00:00 compute-0 podman[208943]: 2026-01-21 14:00:00.70125348 +0000 UTC m=+0.403468039 container remove 8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:00:00 compute-0 systemd[1]: libpod-conmon-8ae3da5faaf5b99759a8e66ba161636469ee1e0e2b71cbcec019cf7977a10a96.scope: Deactivated successfully.
Jan 21 14:00:00 compute-0 sudo[209035]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:00 compute-0 podman[209083]: 2026-01-21 14:00:00.95233544 +0000 UTC m=+0.065423818 container create c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:00:01 compute-0 ceph-mon[75031]: pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:01 compute-0 systemd[1]: Started libpod-conmon-c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333.scope.
Jan 21 14:00:01 compute-0 podman[209083]: 2026-01-21 14:00:00.926176265 +0000 UTC m=+0.039264653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:00:01 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:01 compute-0 podman[209083]: 2026-01-21 14:00:01.064334367 +0000 UTC m=+0.177422735 container init c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:00:01 compute-0 podman[209083]: 2026-01-21 14:00:01.072340071 +0000 UTC m=+0.185428409 container start c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:00:01 compute-0 podman[209083]: 2026-01-21 14:00:01.07725707 +0000 UTC m=+0.190345438 container attach c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 14:00:01 compute-0 sudo[209243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teavtoomyelpblargdhnxqxtszytvtga ; FSID=2f0e9cad-f0a3-5869-9cc3-8d84d071866a KEY=AQAK2HBpAAAAABAAhSWZ4orU8dfgZu1d3brE9g== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004000.9863403-1175-264273292904266/AnsiballZ_command.py'
Jan 21 14:00:01 compute-0 sudo[209243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:01 compute-0 podman[209205]: 2026-01-21 14:00:01.365084542 +0000 UTC m=+0.084797628 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 21 14:00:01 compute-0 frosty_moser[209131]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:00:01 compute-0 frosty_moser[209131]: --> All data devices are unavailable
Jan 21 14:00:01 compute-0 polkitd[43343]: Registered Authentication Agent for unix-process:209274:332361 (system bus name :1.2558 [pkttyagent --process 209274 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 21 14:00:01 compute-0 systemd[1]: libpod-c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333.scope: Deactivated successfully.
Jan 21 14:00:01 compute-0 podman[209083]: 2026-01-21 14:00:01.549033384 +0000 UTC m=+0.662121742 container died c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 14:00:01 compute-0 polkitd[43343]: Unregistered Authentication Agent for unix-process:209274:332361 (system bus name :1.2558, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 21 14:00:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-175b0a69f3b05a1b01ee7890ac6a5861a7a1b97c50e4a9a8bd2ee75c71e62b9b-merged.mount: Deactivated successfully.
Jan 21 14:00:01 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 21 14:00:01 compute-0 sudo[209243]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:01 compute-0 podman[209083]: 2026-01-21 14:00:01.627133729 +0000 UTC m=+0.740222067 container remove c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:00:01 compute-0 systemd[1]: libpod-conmon-c7e10530165db9436b97ec58a1fe6101eae5635063f07bd67520aa9ec28c7333.scope: Deactivated successfully.
Jan 21 14:00:01 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 21 14:00:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:01 compute-0 sudo[208831]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:01 compute-0 sudo[209315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:00:01 compute-0 sudo[209315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:00:01 compute-0 sudo[209315]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:01 compute-0 sudo[209340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:00:01 compute-0 sudo[209340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:00:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:02 compute-0 podman[209441]: 2026-01-21 14:00:02.121835259 +0000 UTC m=+0.039073099 container create 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:00:02 compute-0 systemd[1]: Started libpod-conmon-31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958.scope.
Jan 21 14:00:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:00:02 compute-0 podman[209441]: 2026-01-21 14:00:02.19113869 +0000 UTC m=+0.108376550 container init 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:00:02 compute-0 podman[209441]: 2026-01-21 14:00:02.199074772 +0000 UTC m=+0.116312602 container start 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 14:00:02 compute-0 podman[209441]: 2026-01-21 14:00:02.103877263 +0000 UTC m=+0.021115123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:00:02 compute-0 podman[209441]: 2026-01-21 14:00:02.203633114 +0000 UTC m=+0.120870944 container attach 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 14:00:02 compute-0 tender_wu[209486]: 167 167
Jan 21 14:00:02 compute-0 systemd[1]: libpod-31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958.scope: Deactivated successfully.
Jan 21 14:00:02 compute-0 podman[209441]: 2026-01-21 14:00:02.20639116 +0000 UTC m=+0.123629010 container died 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 14:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0158c1f9d7e8195af63308bf233b0e5c4fba6dbe7d0f343569df97a674ee92e6-merged.mount: Deactivated successfully.
Jan 21 14:00:02 compute-0 sudo[209525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeedluyzcwcpiysiuffyoqrlcaqmzkxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004001.8895268-1183-180910851103841/AnsiballZ_copy.py'
Jan 21 14:00:02 compute-0 sudo[209525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:02 compute-0 podman[209441]: 2026-01-21 14:00:02.247737673 +0000 UTC m=+0.164975513 container remove 31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wu, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 14:00:02 compute-0 systemd[1]: libpod-conmon-31b9d88c8cac842344441f1f17bd7ecf32c370ab602af17476224e2bb1e05958.scope: Deactivated successfully.
Jan 21 14:00:02 compute-0 podman[209544]: 2026-01-21 14:00:02.399183246 +0000 UTC m=+0.046371675 container create cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 14:00:02 compute-0 python3.9[209536]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:02 compute-0 sudo[209525]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:02 compute-0 systemd[1]: Started libpod-conmon-cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3.scope.
Jan 21 14:00:02 compute-0 podman[209544]: 2026-01-21 14:00:02.376980308 +0000 UTC m=+0.024168757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:00:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab0539bf06042ba658bf31d516618844bab3549cd0972d364b4a78beba41765/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab0539bf06042ba658bf31d516618844bab3549cd0972d364b4a78beba41765/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab0539bf06042ba658bf31d516618844bab3549cd0972d364b4a78beba41765/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab0539bf06042ba658bf31d516618844bab3549cd0972d364b4a78beba41765/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:02 compute-0 podman[209544]: 2026-01-21 14:00:02.502650986 +0000 UTC m=+0.149839425 container init cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:00:02 compute-0 podman[209544]: 2026-01-21 14:00:02.510150189 +0000 UTC m=+0.157338608 container start cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 14:00:02 compute-0 podman[209544]: 2026-01-21 14:00:02.513742636 +0000 UTC m=+0.160931045 container attach cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 14:00:02 compute-0 busy_fermi[209561]: {
Jan 21 14:00:02 compute-0 busy_fermi[209561]:     "0": [
Jan 21 14:00:02 compute-0 busy_fermi[209561]:         {
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "devices": [
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "/dev/loop3"
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             ],
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_name": "ceph_lv0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_size": "21470642176",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "name": "ceph_lv0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "tags": {
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.cluster_name": "ceph",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.crush_device_class": "",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.encrypted": "0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.objectstore": "bluestore",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.osd_id": "0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.type": "block",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.vdo": "0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.with_tpm": "0"
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             },
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "type": "block",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "vg_name": "ceph_vg0"
Jan 21 14:00:02 compute-0 busy_fermi[209561]:         }
Jan 21 14:00:02 compute-0 busy_fermi[209561]:     ],
Jan 21 14:00:02 compute-0 busy_fermi[209561]:     "1": [
Jan 21 14:00:02 compute-0 busy_fermi[209561]:         {
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "devices": [
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "/dev/loop4"
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             ],
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_name": "ceph_lv1",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_size": "21470642176",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "name": "ceph_lv1",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "tags": {
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.cluster_name": "ceph",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.crush_device_class": "",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.encrypted": "0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.objectstore": "bluestore",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.osd_id": "1",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.type": "block",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.vdo": "0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.with_tpm": "0"
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             },
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "type": "block",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "vg_name": "ceph_vg1"
Jan 21 14:00:02 compute-0 busy_fermi[209561]:         }
Jan 21 14:00:02 compute-0 busy_fermi[209561]:     ],
Jan 21 14:00:02 compute-0 busy_fermi[209561]:     "2": [
Jan 21 14:00:02 compute-0 busy_fermi[209561]:         {
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "devices": [
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "/dev/loop5"
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             ],
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_name": "ceph_lv2",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_size": "21470642176",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "name": "ceph_lv2",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "tags": {
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.cluster_name": "ceph",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.crush_device_class": "",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.encrypted": "0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.objectstore": "bluestore",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.osd_id": "2",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.type": "block",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.vdo": "0",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:                 "ceph.with_tpm": "0"
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             },
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "type": "block",
Jan 21 14:00:02 compute-0 busy_fermi[209561]:             "vg_name": "ceph_vg2"
Jan 21 14:00:02 compute-0 busy_fermi[209561]:         }
Jan 21 14:00:02 compute-0 busy_fermi[209561]:     ]
Jan 21 14:00:02 compute-0 busy_fermi[209561]: }
Jan 21 14:00:02 compute-0 systemd[1]: libpod-cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3.scope: Deactivated successfully.
Jan 21 14:00:02 compute-0 podman[209544]: 2026-01-21 14:00:02.83505332 +0000 UTC m=+0.482241729 container died cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 14:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ab0539bf06042ba658bf31d516618844bab3549cd0972d364b4a78beba41765-merged.mount: Deactivated successfully.
Jan 21 14:00:02 compute-0 podman[209544]: 2026-01-21 14:00:02.876861494 +0000 UTC m=+0.524049893 container remove cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 14:00:02 compute-0 systemd[1]: libpod-conmon-cdb5a5ad0ba50be8c82a056c2c3ad34207ef3a08743f4bbc15f6b162233777b3.scope: Deactivated successfully.
Jan 21 14:00:02 compute-0 sudo[209340]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:02 compute-0 sudo[209731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdikwehgxrdwhpfwxpxpuxnyntgwecfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004002.643615-1191-82189255848235/AnsiballZ_stat.py'
Jan 21 14:00:02 compute-0 sudo[209731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:02 compute-0 sudo[209732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:00:02 compute-0 sudo[209732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:00:02 compute-0 sudo[209732]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:03 compute-0 ceph-mon[75031]: pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:03 compute-0 sudo[209759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:00:03 compute-0 sudo[209759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:00:03 compute-0 python3.9[209750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:03 compute-0 sudo[209731]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:03 compute-0 podman[209819]: 2026-01-21 14:00:03.263923183 +0000 UTC m=+0.020633562 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:00:03 compute-0 sudo[209930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciwdkifeyyeyslhunijjamsxhpipxxnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004002.643615-1191-82189255848235/AnsiballZ_copy.py'
Jan 21 14:00:03 compute-0 sudo[209930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:03 compute-0 python3.9[209932]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004002.643615-1191-82189255848235/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:03 compute-0 sudo[209930]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:04 compute-0 sudo[210082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncnpdkbvqegbqkhagxqyrhilachkrcxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004004.1039257-1207-142839330613532/AnsiballZ_file.py'
Jan 21 14:00:04 compute-0 sudo[210082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:04 compute-0 python3.9[210084]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:04 compute-0 sudo[210082]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:04 compute-0 podman[209819]: 2026-01-21 14:00:04.693805918 +0000 UTC m=+1.450516317 container create da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 14:00:04 compute-0 systemd[1]: Started libpod-conmon-da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735.scope.
Jan 21 14:00:04 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:00:04 compute-0 podman[209819]: 2026-01-21 14:00:04.806742398 +0000 UTC m=+1.563452757 container init da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:00:04 compute-0 podman[209819]: 2026-01-21 14:00:04.813913782 +0000 UTC m=+1.570624141 container start da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:00:04 compute-0 podman[209819]: 2026-01-21 14:00:04.817789166 +0000 UTC m=+1.574499555 container attach da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:00:04 compute-0 wonderful_almeida[210125]: 167 167
Jan 21 14:00:04 compute-0 systemd[1]: libpod-da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735.scope: Deactivated successfully.
Jan 21 14:00:04 compute-0 podman[209819]: 2026-01-21 14:00:04.819452806 +0000 UTC m=+1.576163165 container died da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 14:00:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4d3263bf935fea339234e4e0f5f88ec0c6ed56bae36d05edea5fd44c4951d27-merged.mount: Deactivated successfully.
Jan 21 14:00:04 compute-0 podman[209819]: 2026-01-21 14:00:04.901805974 +0000 UTC m=+1.658516333 container remove da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:00:04 compute-0 systemd[1]: libpod-conmon-da2d9f6263f771e5bf1b73851d316bd64c285e62f55b20b2126a7abd2f293735.scope: Deactivated successfully.
Jan 21 14:00:05 compute-0 sudo[210268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlfmwfzobvcddkgrhgidlwcofseueqyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004004.7861533-1215-161461967490069/AnsiballZ_stat.py'
Jan 21 14:00:05 compute-0 sudo[210268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:05 compute-0 podman[210259]: 2026-01-21 14:00:05.066785446 +0000 UTC m=+0.025779777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:00:05 compute-0 podman[210259]: 2026-01-21 14:00:05.252280385 +0000 UTC m=+0.211274706 container create 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:00:05 compute-0 systemd[1]: Started libpod-conmon-76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f.scope.
Jan 21 14:00:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3143cfd564c008493c89118ae422f60ae3be19134b42e9bfdd696da06a808a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3143cfd564c008493c89118ae422f60ae3be19134b42e9bfdd696da06a808a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3143cfd564c008493c89118ae422f60ae3be19134b42e9bfdd696da06a808a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3143cfd564c008493c89118ae422f60ae3be19134b42e9bfdd696da06a808a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:00:05 compute-0 python3.9[210277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:05 compute-0 sudo[210268]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:05 compute-0 sudo[210358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssmgglmiwmijshmweivcmwxuxpahsexc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004004.7861533-1215-161461967490069/AnsiballZ_file.py'
Jan 21 14:00:05 compute-0 sudo[210358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:05 compute-0 python3.9[210360]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:05 compute-0 sudo[210358]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:06 compute-0 sudo[210510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpenptcscwljsjjeyqbswohglxmnfffa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004006.125264-1227-19166934779559/AnsiballZ_stat.py'
Jan 21 14:00:06 compute-0 sudo[210510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:06 compute-0 python3.9[210512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:06 compute-0 sudo[210510]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:06 compute-0 sudo[210588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfegswhwqpgdbplxnzlqjesclxbphyvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004006.125264-1227-19166934779559/AnsiballZ_file.py'
Jan 21 14:00:06 compute-0 sudo[210588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:07 compute-0 python3.9[210590]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2m4smt3m recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:07 compute-0 sudo[210588]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:07 compute-0 sudo[210750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmzpyrgewhmgozjecqqxrqbjiytrjypg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004007.3479636-1239-111402601794116/AnsiballZ_stat.py'
Jan 21 14:00:07 compute-0 sudo[210750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:07 compute-0 python3.9[210752]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:07 compute-0 sudo[210750]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:08 compute-0 sudo[210828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilmnydyfrjxiposwucwjxddsbubvroew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004007.3479636-1239-111402601794116/AnsiballZ_file.py'
Jan 21 14:00:08 compute-0 sudo[210828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:08 compute-0 podman[210259]: 2026-01-21 14:00:08.236014643 +0000 UTC m=+3.195009024 container init 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 14:00:08 compute-0 podman[210259]: 2026-01-21 14:00:08.249137591 +0000 UTC m=+3.208131872 container start 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:00:08 compute-0 ceph-mon[75031]: pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:08 compute-0 python3.9[210830]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:08 compute-0 sudo[210828]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:08 compute-0 podman[210259]: 2026-01-21 14:00:08.458008329 +0000 UTC m=+3.417002630 container attach 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:00:08 compute-0 podman[210591]: 2026-01-21 14:00:08.581466533 +0000 UTC m=+1.386294209 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 21 14:00:08 compute-0 sudo[211048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djqsvmvjaojsrfyofxynqcrvgpnzhzox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004008.5718808-1252-141241861028650/AnsiballZ_command.py'
Jan 21 14:00:08 compute-0 sudo[211048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:08 compute-0 lvm[211062]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:00:08 compute-0 lvm[211062]: VG ceph_vg0 finished
Jan 21 14:00:08 compute-0 lvm[211065]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:00:08 compute-0 lvm[211065]: VG ceph_vg1 finished
Jan 21 14:00:09 compute-0 python3.9[211053]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:00:09 compute-0 lvm[211067]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:00:09 compute-0 lvm[211067]: VG ceph_vg2 finished
Jan 21 14:00:09 compute-0 sudo[211048]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:09 compute-0 bold_mclaren[210280]: {}
Jan 21 14:00:09 compute-0 podman[210259]: 2026-01-21 14:00:09.138349881 +0000 UTC m=+4.097344162 container died 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:00:09 compute-0 systemd[1]: libpod-76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f.scope: Deactivated successfully.
Jan 21 14:00:09 compute-0 systemd[1]: libpod-76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f.scope: Consumed 1.388s CPU time.
Jan 21 14:00:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f3143cfd564c008493c89118ae422f60ae3be19134b42e9bfdd696da06a808a-merged.mount: Deactivated successfully.
Jan 21 14:00:09 compute-0 podman[210259]: 2026-01-21 14:00:09.245761037 +0000 UTC m=+4.204755318 container remove 76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mclaren, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:00:09 compute-0 systemd[1]: libpod-conmon-76d3bdc7cd13eb286e3e89cf0365db8360fc771810d8ae8d646fad688fad984f.scope: Deactivated successfully.
Jan 21 14:00:09 compute-0 sudo[209759]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:09 compute-0 sudo[211232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olsahazulikzrcwikqfxhsetejeapfqe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769004009.2344184-1260-223708819542337/AnsiballZ_edpm_nftables_from_files.py'
Jan 21 14:00:09 compute-0 sudo[211232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:00:09 compute-0 ceph-mon[75031]: pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:09 compute-0 ceph-mon[75031]: pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:10 compute-0 python3[211234]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 21 14:00:10 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:00:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:00:10 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:00:10 compute-0 sudo[211232]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:10 compute-0 sudo[211235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:00:10 compute-0 sudo[211235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:00:10 compute-0 sudo[211235]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:10 compute-0 sudo[211409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzbveehqrpwrhzetdgbmgntjhtbuglie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004010.4109554-1268-76841874477737/AnsiballZ_stat.py'
Jan 21 14:00:10 compute-0 sudo[211409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:10 compute-0 python3.9[211411]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:10 compute-0 sudo[211409]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:00:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:00:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:00:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:00:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:00:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:00:11 compute-0 ceph-mon[75031]: pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:00:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:00:11 compute-0 sudo[211487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjchhidmbywaxvrhezfcniacxsckhsuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004010.4109554-1268-76841874477737/AnsiballZ_file.py'
Jan 21 14:00:11 compute-0 sudo[211487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:11 compute-0 python3.9[211489]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:11 compute-0 sudo[211487]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:11 compute-0 sudo[211639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhxwrimvlwugbzyugfetbxmrajnvlcqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004011.6578555-1280-6280688022404/AnsiballZ_stat.py'
Jan 21 14:00:11 compute-0 sudo[211639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:12 compute-0 python3.9[211641]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:12 compute-0 sudo[211639]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:12 compute-0 sudo[211764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbkqqgwkztgmgvlnzlqpdvnpdhzmjumd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004011.6578555-1280-6280688022404/AnsiballZ_copy.py'
Jan 21 14:00:12 compute-0 sudo[211764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:12 compute-0 python3.9[211766]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769004011.6578555-1280-6280688022404/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:12 compute-0 sudo[211764]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:13 compute-0 sudo[211916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwxgbtbjigghdysyvxsfnkgkimmiqebi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004012.9573076-1295-226625237188338/AnsiballZ_stat.py'
Jan 21 14:00:13 compute-0 sudo[211916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:13 compute-0 python3.9[211918]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:13 compute-0 sudo[211916]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:13 compute-0 ceph-mon[75031]: pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:13 compute-0 sudo[211994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuidzhjaeqwfyaqxjvrwvaqzquwodpoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004012.9573076-1295-226625237188338/AnsiballZ_file.py'
Jan 21 14:00:13 compute-0 sudo[211994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:13 compute-0 python3.9[211996]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:13 compute-0 sudo[211994]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:14 compute-0 sudo[212146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzsiogbyibenbisbpamdtbqoueqtrbvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004014.0900085-1307-168292345310499/AnsiballZ_stat.py'
Jan 21 14:00:14 compute-0 sudo[212146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:14 compute-0 python3.9[212148]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:14 compute-0 sudo[212146]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:14 compute-0 sudo[212224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uylkgkpbskcubxmmiamulbgzbuyixygq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004014.0900085-1307-168292345310499/AnsiballZ_file.py'
Jan 21 14:00:14 compute-0 sudo[212224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:15 compute-0 python3.9[212226]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:15 compute-0 sudo[212224]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:15 compute-0 ceph-mon[75031]: pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:15 compute-0 sudo[212376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxndqvubhjsjparvlakkqrxrtryvetbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004015.248628-1319-196006270225572/AnsiballZ_stat.py'
Jan 21 14:00:15 compute-0 sudo[212376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:15 compute-0 python3.9[212378]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:15 compute-0 sudo[212376]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:16 compute-0 sudo[212501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tesvqgdxwwmfvpmyvtryqawbajpqjigp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004015.248628-1319-196006270225572/AnsiballZ_copy.py'
Jan 21 14:00:16 compute-0 sudo[212501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:16 compute-0 python3.9[212503]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769004015.248628-1319-196006270225572/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:16 compute-0 sudo[212501]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:16 compute-0 sudo[212653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueoiyklxlcfuowvjdfssqxwiwbhuewbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004016.697525-1334-7325314883251/AnsiballZ_file.py'
Jan 21 14:00:16 compute-0 sudo[212653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:17 compute-0 python3.9[212655]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:17 compute-0 sudo[212653]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:17 compute-0 ceph-mon[75031]: pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:17 compute-0 sudo[212805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlrhftkjwsrdbrhpkbkquhhtfsymaphz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004017.3660462-1342-246422398509592/AnsiballZ_command.py'
Jan 21 14:00:17 compute-0 sudo[212805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:17 compute-0 python3.9[212807]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:00:17 compute-0 sudo[212805]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:18 compute-0 sudo[212960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-effsnfvuflamkkumgpdmwvlcgjpnqxjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004018.0941808-1350-85287799134085/AnsiballZ_blockinfile.py'
Jan 21 14:00:18 compute-0 sudo[212960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:18 compute-0 python3.9[212962]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:18 compute-0 sudo[212960]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:19 compute-0 sudo[213112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmuolonuxistgvfkggbkujnvxbyqnxhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004019.0007257-1359-260015940773077/AnsiballZ_command.py'
Jan 21 14:00:19 compute-0 sudo[213112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:19 compute-0 python3.9[213114]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:00:19 compute-0 sudo[213112]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:19 compute-0 ceph-mon[75031]: pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:20 compute-0 sudo[213265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxoqzqweqgbnawustvvnzmdypfdnconi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004019.7648458-1367-253644659787491/AnsiballZ_stat.py'
Jan 21 14:00:20 compute-0 sudo[213265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:20 compute-0 python3.9[213267]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 14:00:20 compute-0 sudo[213265]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:20 compute-0 sudo[213419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pergfdjifpmcccwwukbvetrtiqfhlicx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004020.4023068-1375-185347021122131/AnsiballZ_command.py'
Jan 21 14:00:20 compute-0 sudo[213419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:20 compute-0 python3.9[213421]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:00:20 compute-0 sudo[213419]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:21 compute-0 sudo[213574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxepvfcszvlncnfunykunmeybpnnqcwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004021.0453496-1383-39521260230690/AnsiballZ_file.py'
Jan 21 14:00:21 compute-0 sudo[213574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:21 compute-0 python3.9[213576]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:21 compute-0 sudo[213574]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:22 compute-0 ceph-mon[75031]: pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:22 compute-0 sudo[213726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqxcywxfittsilmydgwedlqtghjghefq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004021.9249754-1391-248770487732909/AnsiballZ_stat.py'
Jan 21 14:00:22 compute-0 sudo[213726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:22 compute-0 python3.9[213728]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:22 compute-0 sudo[213726]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:22 compute-0 sudo[213849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olovbqtfbabvcikovlbtrklbegztbvwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004021.9249754-1391-248770487732909/AnsiballZ_copy.py'
Jan 21 14:00:22 compute-0 sudo[213849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:23 compute-0 python3.9[213851]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004021.9249754-1391-248770487732909/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:23 compute-0 sudo[213849]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:23 compute-0 ceph-mon[75031]: pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:23 compute-0 sudo[214001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjkcobltbpgwgngrvvrfmwgwqqglfbku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004023.2057672-1406-126552002965037/AnsiballZ_stat.py'
Jan 21 14:00:23 compute-0 sudo[214001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:23 compute-0 python3.9[214003]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:23 compute-0 sudo[214001]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:24 compute-0 sudo[214124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmrhkbyvdcewkebuqfjvipxaddzxsawr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004023.2057672-1406-126552002965037/AnsiballZ_copy.py'
Jan 21 14:00:24 compute-0 sudo[214124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:24 compute-0 python3.9[214126]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004023.2057672-1406-126552002965037/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:24 compute-0 sudo[214124]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:24 compute-0 sudo[214276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koxovjblqeavmiamnkvpfqaiqtkvryqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004024.5823274-1421-128822425754455/AnsiballZ_stat.py'
Jan 21 14:00:24 compute-0 sudo[214276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:25 compute-0 python3.9[214278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:25 compute-0 sudo[214276]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:25 compute-0 sudo[214399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slxlgzcqkhaybydzwpkedmvwuhielbjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004024.5823274-1421-128822425754455/AnsiballZ_copy.py'
Jan 21 14:00:25 compute-0 sudo[214399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:25 compute-0 python3.9[214401]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004024.5823274-1421-128822425754455/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:25 compute-0 sudo[214399]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:26 compute-0 sudo[214551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfvkgpzbjikjnmssiblvkmodilfytato ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004025.91878-1436-56475306569795/AnsiballZ_systemd.py'
Jan 21 14:00:26 compute-0 sudo[214551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:26 compute-0 python3.9[214553]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:00:26 compute-0 systemd[1]: Reloading.
Jan 21 14:00:26 compute-0 systemd-sysv-generator[214587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:00:26 compute-0 systemd-rc-local-generator[214582]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:00:27 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 21 14:00:27 compute-0 sudo[214551]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:27 compute-0 ceph-mon[75031]: pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:27 compute-0 sudo[214743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngdszmchmggsyumhivumopdfaqqgatxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004027.621836-1444-90287771654767/AnsiballZ_systemd.py'
Jan 21 14:00:27 compute-0 sudo[214743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:28 compute-0 python3.9[214745]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 21 14:00:28 compute-0 systemd[1]: Reloading.
Jan 21 14:00:28 compute-0 systemd-rc-local-generator[214770]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:00:28 compute-0 systemd-sysv-generator[214777]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:00:28 compute-0 systemd[1]: Reloading.
Jan 21 14:00:28 compute-0 systemd-rc-local-generator[214809]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:00:28 compute-0 systemd-sysv-generator[214813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:00:28 compute-0 ceph-mon[75031]: pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:28 compute-0 ceph-mon[75031]: pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:29 compute-0 sudo[214743]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:29 compute-0 sshd-session[155881]: Connection closed by 192.168.122.30 port 53628
Jan 21 14:00:29 compute-0 sshd-session[155878]: pam_unix(sshd:session): session closed for user zuul
Jan 21 14:00:29 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 21 14:00:29 compute-0 systemd[1]: session-49.scope: Consumed 3min 38.078s CPU time.
Jan 21 14:00:29 compute-0 systemd-logind[780]: Session 49 logged out. Waiting for processes to exit.
Jan 21 14:00:29 compute-0 systemd-logind[780]: Removed session 49.
Jan 21 14:00:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:31 compute-0 ceph-mon[75031]: pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:32 compute-0 podman[214841]: 2026-01-21 14:00:32.39542782 +0000 UTC m=+0.111700411 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 14:00:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:33 compute-0 ceph-mon[75031]: pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:00:33.889 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:00:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:00:33.890 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:00:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:00:33.890 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:00:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:35 compute-0 ceph-mon[75031]: pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:38 compute-0 ceph-mon[75031]: pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:38 compute-0 sshd-session[214868]: Accepted publickey for zuul from 192.168.122.30 port 57222 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 14:00:38 compute-0 systemd-logind[780]: New session 50 of user zuul.
Jan 21 14:00:38 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 21 14:00:38 compute-0 sshd-session[214868]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 14:00:38 compute-0 podman[214870]: 2026-01-21 14:00:38.745306314 +0000 UTC m=+0.085710310 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 21 14:00:39 compute-0 ceph-mon[75031]: pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:00:39
Jan 21 14:00:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:00:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:00:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 21 14:00:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
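This balancer pass is a no-op: "prepared 0/10 upmap changes" means no PG remappings were needed across the listed pools within the 0.050000 max-misplaced budget. The module's mode and most recent plan can be inspected with the standard CLI:

    # Shows the active mode (upmap here) and the result of the last optimize run.
    ceph balancer status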
Jan 21 14:00:39 compute-0 python3.9[215042]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 14:00:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:00:41 compute-0 python3.9[215196]: ansible-ansible.builtin.service_facts Invoked
Jan 21 14:00:41 compute-0 ceph-mon[75031]: pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:41 compute-0 network[215213]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 14:00:41 compute-0 network[215214]: 'network-scripts' will be removed from distribution in near future.
Jan 21 14:00:41 compute-0 network[215215]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 14:00:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:43 compute-0 ceph-mon[75031]: pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:45 compute-0 ceph-mon[75031]: pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:46 compute-0 sudo[215485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptewtvuswcxolihledhaoblkhumjanbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004046.1448703-42-17108711201595/AnsiballZ_setup.py'
Jan 21 14:00:46 compute-0 sudo[215485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:46 compute-0 python3.9[215487]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 21 14:00:47 compute-0 sudo[215485]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:47 compute-0 sudo[215569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmqdovrcmwufbfinxewzgvtbscihzygy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004046.1448703-42-17108711201595/AnsiballZ_dnf.py'
Jan 21 14:00:47 compute-0 sudo[215569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:47 compute-0 ceph-mon[75031]: pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:47 compute-0 python3.9[215571]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
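The AnsiballZ_dnf invocation above (name=['iscsi-initiator-utils'], state=present) is the module form of a plain package install; outside Ansible the equivalent would be:

    # state=present with install_weak_deps=True, as in the logged parameters
    dnf install -y iscsi-initiator-utils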
Jan 21 14:00:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:49 compute-0 ceph-mon[75031]: pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:00:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
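The pg_autoscaler lines above all fit the autoscaler's usual arithmetic: a pool's PG target is its share of raw capacity, times its bias, times the PG budget of its CRUSH root, then quantized to a power of two no lower than the pool's floor. Assuming the default mon_target_pg_per_osd = 100 and the three OSDs this deployment uses (a budget of 300 PGs), the logged '.mgr' and 'cephfs.cephfs.meta' targets reproduce exactly:

\[
\mathrm{pg\_target} = \mathrm{ratio}\times\mathrm{bias}\times(100\times 3):\qquad
7.18575\times10^{-6}\times 1.0\times 300 \approx 0.0021557,\qquad
1.27531\times10^{-6}\times 4.0\times 300 \approx 0.0015304,
\]

hence "quantized to 1" for .mgr and, given the metadata pool's higher floor, "quantized to 16".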
Jan 21 14:00:51 compute-0 ceph-mon[75031]: pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:51 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:53 compute-0 ceph-mon[75031]: pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:53 compute-0 sudo[215569]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:53 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:54 compute-0 sudo[215722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wivrryahstljmbcgmmrilfaaypglovku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004053.7374303-54-137320564893396/AnsiballZ_stat.py'
Jan 21 14:00:54 compute-0 sudo[215722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:54 compute-0 python3.9[215724]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 14:00:54 compute-0 sudo[215722]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:55 compute-0 sudo[215874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbtxvohspwjnaqymysnygcgjegsotpxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004054.9563437-64-152080949466641/AnsiballZ_command.py'
Jan 21 14:00:55 compute-0 sudo[215874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:55 compute-0 ceph-mon[75031]: pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:55 compute-0 python3.9[215876]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
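The restorecon invocation above is a dry run: -n reports files whose SELinux labels would change without modifying anything, -v prints each one, -r recurses. Applying the relabel for real would drop the -n:

    /usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi   # check only, as logged above
    /usr/sbin/restorecon -vr /etc/iscsi /var/lib/iscsi    # actually fix the labels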
Jan 21 14:00:55 compute-0 sudo[215874]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:55 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:56 compute-0 sudo[216027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiojrqdfswvthjomeynjudenarfyjilh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004055.87445-74-37092443287122/AnsiballZ_stat.py'
Jan 21 14:00:56 compute-0 sudo[216027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:56 compute-0 python3.9[216029]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 14:00:56 compute-0 sudo[216027]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:56 compute-0 sudo[216179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcdqvrozofndmaibknuvdbtkwndeybjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004056.451907-82-134354518779348/AnsiballZ_command.py'
Jan 21 14:00:56 compute-0 sudo[216179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:56 compute-0 python3.9[216181]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
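iscsi-iname prints a freshly generated initiator IQN, which the copy task just below writes to /etc/iscsi/initiatorname.iscsi (the file content itself is not logged). A sketch of the round trip; the IQN prefix varies by build, and the InitiatorName= layout is the standard file format rather than something taken from this log:

    IQN=$(/usr/sbin/iscsi-iname)                 # e.g. iqn.1994-05.com.redhat:<random suffix>
    echo "InitiatorName=$IQN" > /etc/iscsi/initiatorname.iscsi
    chmod 0644 /etc/iscsi/initiatorname.iscsi    # mode=0644, as in the copy task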
Jan 21 14:00:57 compute-0 sudo[216179]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:57 compute-0 sudo[216332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyxobizisvhkuxhjxeybptyyckmdcyzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004057.167376-90-82637160701717/AnsiballZ_stat.py'
Jan 21 14:00:57 compute-0 sudo[216332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:57 compute-0 python3.9[216334]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:00:57 compute-0 sudo[216332]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:57 compute-0 ceph-mon[75031]: pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:57 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:58 compute-0 sudo[216455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqrfrvmiylehvpsesladdlppflyyienl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004057.167376-90-82637160701717/AnsiballZ_copy.py'
Jan 21 14:00:58 compute-0 sudo[216455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:58 compute-0 python3.9[216457]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004057.167376-90-82637160701717/.source.iscsi _original_basename=.2na6g9u2 follow=False checksum=2be7a59c6f4b810a0d44607c773617b0c858b872 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:58 compute-0 sudo[216455]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:00:59 compute-0 sudo[216607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdabluhfymqdtgnvubowxyitkwdrupci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004058.5718348-105-111112367145412/AnsiballZ_file.py'
Jan 21 14:00:59 compute-0 sudo[216607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:59 compute-0 ceph-mon[75031]: pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:00:59 compute-0 python3.9[216609]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:00:59 compute-0 sudo[216607]: pam_unix(sudo:session): session closed for user root
Jan 21 14:00:59 compute-0 sudo[216759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlerwycnjhndzgeprptogirverngnxfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004059.4298985-113-30652604316679/AnsiballZ_lineinfile.py'
Jan 21 14:00:59 compute-0 sudo[216759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:00:59 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:00 compute-0 python3.9[216761]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
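The lineinfile task above pins the CHAP digest preference in iscsid.conf, inserting or replacing the node.session.auth.chap_algs line. A quick verification after the play:

    grep '^node.session.auth.chap_algs' /etc/iscsi/iscsid.conf
    # expected: node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5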
Jan 21 14:01:00 compute-0 sudo[216759]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:00 compute-0 sudo[216911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huztkgnsmiucokfakucisokldpbrkfyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004060.277168-122-36472279179403/AnsiballZ_systemd_service.py'
Jan 21 14:01:00 compute-0 sudo[216911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:01 compute-0 python3.9[216913]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:01:01 compute-0 CROND[216917]: (root) CMD (run-parts /etc/cron.hourly)
Jan 21 14:01:01 compute-0 run-parts[216921]: (/etc/cron.hourly) starting 0anacron
Jan 21 14:01:01 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 21 14:01:01 compute-0 ceph-mon[75031]: pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:01 compute-0 anacron[216930]: Anacron started on 2026-01-21
Jan 21 14:01:01 compute-0 anacron[216930]: Will run job `cron.daily' in 19 min.
Jan 21 14:01:01 compute-0 anacron[216930]: Will run job `cron.weekly' in 39 min.
Jan 21 14:01:01 compute-0 anacron[216930]: Will run job `cron.monthly' in 59 min.
Jan 21 14:01:01 compute-0 anacron[216930]: Jobs will be executed sequentially
Jan 21 14:01:01 compute-0 run-parts[216932]: (/etc/cron.hourly) finished 0anacron
Jan 21 14:01:01 compute-0 CROND[216916]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 21 14:01:01 compute-0 sudo[216911]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:01 compute-0 sudo[217082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbxtbikxguezpnsfptcykwxnpkxvkkvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004061.4839816-130-269639972326882/AnsiballZ_systemd_service.py'
Jan 21 14:01:01 compute-0 sudo[217082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:01 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:02 compute-0 python3.9[217084]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:01:02 compute-0 systemd[1]: Reloading.
Jan 21 14:01:02 compute-0 systemd-rc-local-generator[217111]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:01:02 compute-0 systemd-sysv-generator[217118]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:01:02 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 21 14:01:02 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 21 14:01:02 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 21 14:01:02 compute-0 systemd[1]: Started Open-iSCSI.
Jan 21 14:01:02 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 21 14:01:02 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 21 14:01:02 compute-0 sudo[217082]: pam_unix(sudo:session): session closed for user root
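Taken together, the two systemd_service tasks at 14:01:00 and 14:01:02 (enabled=True, state=started for iscsid.socket and then iscsid) match the socket and daemon activations logged above and are equivalent to:

    systemctl enable --now iscsid.socket
    systemctl enable --now iscsid.service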
Jan 21 14:01:02 compute-0 podman[217124]: 2026-01-21 14:01:02.676448192 +0000 UTC m=+0.130489530 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 14:01:03 compute-0 ceph-mon[75031]: pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:03 compute-0 python3.9[217311]: ansible-ansible.builtin.service_facts Invoked
Jan 21 14:01:03 compute-0 network[217328]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 14:01:03 compute-0 network[217329]: 'network-scripts' will be removed from distribution in near future.
Jan 21 14:01:03 compute-0 network[217330]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 14:01:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:01:03 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:05 compute-0 ceph-mon[75031]: pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:05 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:07 compute-0 ceph-mon[75031]: pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:07 compute-0 sudo[217600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alyyyotseppjyajikhrurczhdsxixhwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004066.98071-153-203786324867478/AnsiballZ_dnf.py'
Jan 21 14:01:07 compute-0 sudo[217600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:07 compute-0 python3.9[217602]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 14:01:07 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:01:08 compute-0 ceph-mon[75031]: pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:09 compute-0 podman[217606]: 2026-01-21 14:01:09.347396211 +0000 UTC m=+0.061754707 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 14:01:09 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:10 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 14:01:10 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 14:01:10 compute-0 systemd[1]: Reloading.
Jan 21 14:01:10 compute-0 systemd-sysv-generator[217670]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:01:10 compute-0 systemd-rc-local-generator[217667]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:01:10 compute-0 sudo[217676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:01:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 14:01:10 compute-0 sudo[217676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:01:10 compute-0 sudo[217676]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:10 compute-0 sudo[217704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:01:10 compute-0 sudo[217704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:01:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:01:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:01:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:01:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:01:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:01:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:01:11 compute-0 sudo[217704]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:01:11 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:01:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:01:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:01:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:01:11 compute-0 ceph-mon[75031]: pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:01:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:01:11 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:01:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:01:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:01:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:01:11 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:01:11 compute-0 sudo[217868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:01:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 14:01:11 compute-0 sudo[217868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:01:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 14:01:11 compute-0 sudo[217868]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:11 compute-0 systemd[1]: run-rf844a7a045794c09ba223cd02586527b.service: Deactivated successfully.
Jan 21 14:01:11 compute-0 sudo[217894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:01:11 compute-0 sudo[217894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:01:11 compute-0 podman[217930]: 2026-01-21 14:01:11.928542316 +0000 UTC m=+0.121553875 container create 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 14:01:11 compute-0 podman[217930]: 2026-01-21 14:01:11.835640532 +0000 UTC m=+0.028652121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:01:11 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:12 compute-0 sudo[217600]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:12 compute-0 sudo[218093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbtnwvrrjsqgodfcadwhyvfzaqdcajuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004072.225703-162-261532228801113/AnsiballZ_file.py'
Jan 21 14:01:12 compute-0 sudo[218093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:12 compute-0 python3.9[218095]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 21 14:01:12 compute-0 sudo[218093]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:13 compute-0 systemd[1]: Started libpod-conmon-060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956.scope.
Jan 21 14:01:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:01:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:01:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:01:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:01:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:01:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:01:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:01:13 compute-0 podman[217930]: 2026-01-21 14:01:13.212754291 +0000 UTC m=+1.405765890 container init 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 14:01:13 compute-0 podman[217930]: 2026-01-21 14:01:13.227265581 +0000 UTC m=+1.420277170 container start 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:01:13 compute-0 crazy_noyce[218175]: 167 167
Jan 21 14:01:13 compute-0 systemd[1]: libpod-060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956.scope: Deactivated successfully.
Jan 21 14:01:13 compute-0 sudo[218262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvjcqzsvvojfrhvwnlddcaientlcsiyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004072.9261985-170-149203933449696/AnsiballZ_modprobe.py'
Jan 21 14:01:13 compute-0 sudo[218262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:13 compute-0 python3.9[218264]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 21 14:01:13 compute-0 sudo[218262]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:01:13 compute-0 podman[217930]: 2026-01-21 14:01:13.887614567 +0000 UTC m=+2.080626126 container attach 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:01:13 compute-0 podman[217930]: 2026-01-21 14:01:13.889233275 +0000 UTC m=+2.082244884 container died 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 14:01:13 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:14 compute-0 sudo[218418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpbmhkrcavhqpqtqcdabjdkdlnftsltg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004073.8738213-178-71466403781782/AnsiballZ_stat.py'
Jan 21 14:01:14 compute-0 sudo[218418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:14 compute-0 ceph-mon[75031]: pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b503a0e98cca48a5ee24ebf83442940b0a1cab11e3fa9dd71ef41c591e0bd55-merged.mount: Deactivated successfully.
Jan 21 14:01:14 compute-0 python3.9[218420]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:01:14 compute-0 sudo[218418]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:14 compute-0 podman[217930]: 2026-01-21 14:01:14.652739234 +0000 UTC m=+2.845750783 container remove 060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_noyce, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 14:01:14 compute-0 systemd[1]: libpod-conmon-060abe66ea10cfc17196bf90b60791c11f2b7abfc3da9cbdfcb0ad7f1f95a956.scope: Deactivated successfully.
Jan 21 14:01:14 compute-0 sudo[218551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfbtioanmlcvsuxsfamvidmyicniyhsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004073.8738213-178-71466403781782/AnsiballZ_copy.py'
Jan 21 14:01:14 compute-0 sudo[218551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:14 compute-0 podman[218547]: 2026-01-21 14:01:14.834680492 +0000 UTC m=+0.025141736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:01:15 compute-0 python3.9[218562]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004073.8738213-178-71466403781782/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:15 compute-0 sudo[218551]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:15 compute-0 podman[218547]: 2026-01-21 14:01:15.080481324 +0000 UTC m=+0.270942558 container create 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 14:01:15 compute-0 systemd[1]: Started libpod-conmon-3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d.scope.
Jan 21 14:01:15 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:15 compute-0 sudo[218720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbdsdgutlcykfzlhctwnzatbbgzuvlfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004075.2615254-194-141951362930758/AnsiballZ_lineinfile.py'
Jan 21 14:01:15 compute-0 sudo[218720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:15 compute-0 podman[218547]: 2026-01-21 14:01:15.613274943 +0000 UTC m=+0.803736177 container init 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:01:15 compute-0 ceph-mon[75031]: pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:15 compute-0 podman[218547]: 2026-01-21 14:01:15.624683887 +0000 UTC m=+0.815145121 container start 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:01:15 compute-0 python3.9[218722]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:15 compute-0 sudo[218720]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:15 compute-0 podman[218547]: 2026-01-21 14:01:15.805966168 +0000 UTC m=+0.996427402 container attach 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 14:01:15 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:16 compute-0 competent_pare[218592]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:01:16 compute-0 competent_pare[218592]: --> All data devices are unavailable
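This is the end of the ceph-volume run launched at 14:01:11: all three logical volumes passed to "lvm batch" were rejected as data devices, so no new OSDs are created (typically this means the LVs already carry prepared OSDs, consistent with the "lvm list" query cephadm issues at 14:01:16). Stripped of the cephadm/podman wrapper, the inner command was:

    ceph-volume lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --objectstore bluestore --yes --no-systemd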
Jan 21 14:01:16 compute-0 systemd[1]: libpod-3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d.scope: Deactivated successfully.
Jan 21 14:01:16 compute-0 podman[218547]: 2026-01-21 14:01:16.13315646 +0000 UTC m=+1.323617704 container died 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 14:01:16 compute-0 sudo[218899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncchyqtgntjwedthjytesqmwyjgyuxbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004075.9953277-202-277628585363711/AnsiballZ_systemd.py'
Jan 21 14:01:16 compute-0 sudo[218899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-94eb881c1d3248f6215b4767aa70ae1b6f182b5a3b4ad656ae3b651d6c2bb1d3-merged.mount: Deactivated successfully.
Jan 21 14:01:16 compute-0 podman[218547]: 2026-01-21 14:01:16.735216284 +0000 UTC m=+1.925677528 container remove 3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 21 14:01:16 compute-0 sudo[217894]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:16 compute-0 systemd[1]: libpod-conmon-3caa29070842f1402dcf73dd94b069e398a62152c7fa6ef558fc65818912954d.scope: Deactivated successfully.
Jan 21 14:01:16 compute-0 sudo[218902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:01:16 compute-0 sudo[218902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:01:16 compute-0 sudo[218902]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:16 compute-0 sudo[218927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:01:16 compute-0 sudo[218927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:01:16 compute-0 python3.9[218901]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 14:01:17 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 21 14:01:17 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 21 14:01:17 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 21 14:01:17 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 21 14:01:17 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 21 14:01:17 compute-0 sudo[218899]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:17 compute-0 podman[218969]: 2026-01-21 14:01:17.138076506 +0000 UTC m=+0.038279922 container create fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:01:17 compute-0 podman[218969]: 2026-01-21 14:01:17.120418831 +0000 UTC m=+0.020622267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:01:17 compute-0 systemd[1]: Started libpod-conmon-fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15.scope.
Jan 21 14:01:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:01:17 compute-0 podman[218969]: 2026-01-21 14:01:17.353298754 +0000 UTC m=+0.253502200 container init fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:01:17 compute-0 podman[218969]: 2026-01-21 14:01:17.360341703 +0000 UTC m=+0.260545119 container start fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:01:17 compute-0 podman[218969]: 2026-01-21 14:01:17.364498114 +0000 UTC m=+0.264701520 container attach fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 14:01:17 compute-0 nifty_bassi[219060]: 167 167
Jan 21 14:01:17 compute-0 systemd[1]: libpod-fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15.scope: Deactivated successfully.
Jan 21 14:01:17 compute-0 podman[218969]: 2026-01-21 14:01:17.366974872 +0000 UTC m=+0.267178288 container died fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:01:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b28862fcb92af11911d6672c99077946be1f536e2bfbdf66e0a946fd511586a7-merged.mount: Deactivated successfully.
Jan 21 14:01:17 compute-0 podman[218969]: 2026-01-21 14:01:17.41675507 +0000 UTC m=+0.316958496 container remove fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:01:17 compute-0 systemd[1]: libpod-conmon-fe2d12d04c37c95bd2659953741284fdc7cb27aca628ef86579f6f7345cf9d15.scope: Deactivated successfully.
Jan 21 14:01:17 compute-0 sudo[219153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqnyehtvvkhbmcsuqqdcqdrbdqgkxnmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004077.2654166-210-277895908107031/AnsiballZ_command.py'
Jan 21 14:01:17 compute-0 sudo[219153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:17 compute-0 podman[219161]: 2026-01-21 14:01:17.597829946 +0000 UTC m=+0.043333823 container create 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 14:01:17 compute-0 systemd[1]: Started libpod-conmon-94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0.scope.
Jan 21 14:01:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743d20bf0c74755d8508925d1f7aff13aa9e730ae97d1916e0d9132493fd1ab9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743d20bf0c74755d8508925d1f7aff13aa9e730ae97d1916e0d9132493fd1ab9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743d20bf0c74755d8508925d1f7aff13aa9e730ae97d1916e0d9132493fd1ab9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743d20bf0c74755d8508925d1f7aff13aa9e730ae97d1916e0d9132493fd1ab9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:17 compute-0 podman[219161]: 2026-01-21 14:01:17.580375047 +0000 UTC m=+0.025878954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:01:17 compute-0 podman[219161]: 2026-01-21 14:01:17.677524604 +0000 UTC m=+0.123028521 container init 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 21 14:01:17 compute-0 podman[219161]: 2026-01-21 14:01:17.689453351 +0000 UTC m=+0.134957228 container start 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:01:17 compute-0 ceph-mon[75031]: pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:17 compute-0 podman[219161]: 2026-01-21 14:01:17.703399967 +0000 UTC m=+0.148903884 container attach 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:01:17 compute-0 python3.9[219156]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:01:17 compute-0 sudo[219153]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:17 compute-0 confident_swartz[219178]: {
Jan 21 14:01:17 compute-0 confident_swartz[219178]:     "0": [
Jan 21 14:01:17 compute-0 confident_swartz[219178]:         {
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "devices": [
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "/dev/loop3"
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             ],
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_name": "ceph_lv0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_size": "21470642176",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "name": "ceph_lv0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "tags": {
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.cluster_name": "ceph",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.crush_device_class": "",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.encrypted": "0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.objectstore": "bluestore",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.osd_id": "0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.type": "block",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.vdo": "0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.with_tpm": "0"
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             },
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "type": "block",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "vg_name": "ceph_vg0"
Jan 21 14:01:17 compute-0 confident_swartz[219178]:         }
Jan 21 14:01:17 compute-0 confident_swartz[219178]:     ],
Jan 21 14:01:17 compute-0 confident_swartz[219178]:     "1": [
Jan 21 14:01:17 compute-0 confident_swartz[219178]:         {
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "devices": [
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "/dev/loop4"
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             ],
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_name": "ceph_lv1",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_size": "21470642176",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "name": "ceph_lv1",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "tags": {
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.cluster_name": "ceph",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.crush_device_class": "",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.encrypted": "0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.objectstore": "bluestore",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.osd_id": "1",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.type": "block",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.vdo": "0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.with_tpm": "0"
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             },
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "type": "block",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "vg_name": "ceph_vg1"
Jan 21 14:01:17 compute-0 confident_swartz[219178]:         }
Jan 21 14:01:17 compute-0 confident_swartz[219178]:     ],
Jan 21 14:01:17 compute-0 confident_swartz[219178]:     "2": [
Jan 21 14:01:17 compute-0 confident_swartz[219178]:         {
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "devices": [
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "/dev/loop5"
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             ],
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_name": "ceph_lv2",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_size": "21470642176",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "name": "ceph_lv2",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "tags": {
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.cluster_name": "ceph",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.crush_device_class": "",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.encrypted": "0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.objectstore": "bluestore",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.osd_id": "2",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.type": "block",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.vdo": "0",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:                 "ceph.with_tpm": "0"
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             },
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "type": "block",
Jan 21 14:01:17 compute-0 confident_swartz[219178]:             "vg_name": "ceph_vg2"
Jan 21 14:01:17 compute-0 confident_swartz[219178]:         }
Jan 21 14:01:17 compute-0 confident_swartz[219178]:     ]
Jan 21 14:01:17 compute-0 confident_swartz[219178]: }
Jan 21 14:01:17 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:17 compute-0 systemd[1]: libpod-94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0.scope: Deactivated successfully.
Jan 21 14:01:18 compute-0 podman[219161]: 2026-01-21 14:01:17.999899019 +0000 UTC m=+0.445402896 container died 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:01:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-743d20bf0c74755d8508925d1f7aff13aa9e730ae97d1916e0d9132493fd1ab9-merged.mount: Deactivated successfully.
Jan 21 14:01:18 compute-0 podman[219161]: 2026-01-21 14:01:18.102022156 +0000 UTC m=+0.547526033 container remove 94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 14:01:18 compute-0 systemd[1]: libpod-conmon-94cbc86fa5e983b26c8b3264e9e19b21c589d7378cff0f91e10ac73e157347d0.scope: Deactivated successfully.
Jan 21 14:01:18 compute-0 sudo[218927]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:18 compute-0 sudo[219280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:01:18 compute-0 sudo[219280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:01:18 compute-0 sudo[219280]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:18 compute-0 sudo[219328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:01:18 compute-0 sudo[219328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:01:18 compute-0 sudo[219399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raxgrkananhgqnrcjbhermosuvzviphc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004078.0544577-220-213683613471863/AnsiballZ_stat.py'
Jan 21 14:01:18 compute-0 sudo[219399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:18 compute-0 python3.9[219401]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 14:01:18 compute-0 sudo[219399]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:18 compute-0 podman[219413]: 2026-01-21 14:01:18.539389959 +0000 UTC m=+0.046176292 container create 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:01:18 compute-0 systemd[1]: Started libpod-conmon-99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814.scope.
Jan 21 14:01:18 compute-0 podman[219413]: 2026-01-21 14:01:18.515086143 +0000 UTC m=+0.021872496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:01:18 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:01:18 compute-0 podman[219413]: 2026-01-21 14:01:18.64417616 +0000 UTC m=+0.150962523 container init 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 14:01:18 compute-0 podman[219413]: 2026-01-21 14:01:18.657159132 +0000 UTC m=+0.163945475 container start 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 14:01:18 compute-0 podman[219413]: 2026-01-21 14:01:18.660918022 +0000 UTC m=+0.167704355 container attach 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 14:01:18 compute-0 nervous_aryabhata[219453]: 167 167
Jan 21 14:01:18 compute-0 systemd[1]: libpod-99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814.scope: Deactivated successfully.
Jan 21 14:01:18 compute-0 podman[219413]: 2026-01-21 14:01:18.663445423 +0000 UTC m=+0.170231756 container died 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 14:01:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-89de339261dbc5f2534d9e7f8ee84b177c7a9c8b5053fdab32c8b4286cc2c4f9-merged.mount: Deactivated successfully.
Jan 21 14:01:18 compute-0 podman[219413]: 2026-01-21 14:01:18.715790833 +0000 UTC m=+0.222577206 container remove 99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 14:01:18 compute-0 systemd[1]: libpod-conmon-99b9ec38df66839c4780e4df6db204e0b9509b12e54d63ddebf8acf61a8d9814.scope: Deactivated successfully.
Jan 21 14:01:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:01:18 compute-0 podman[219535]: 2026-01-21 14:01:18.891453208 +0000 UTC m=+0.045682689 container create 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:01:18 compute-0 systemd[1]: Started libpod-conmon-4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42.scope.
Jan 21 14:01:18 compute-0 podman[219535]: 2026-01-21 14:01:18.869804738 +0000 UTC m=+0.024034229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:01:18 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:01:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7936bbb628ef5aa00105e804c97196c32ca345c6f951dfb27f022a69bb814c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7936bbb628ef5aa00105e804c97196c32ca345c6f951dfb27f022a69bb814c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7936bbb628ef5aa00105e804c97196c32ca345c6f951dfb27f022a69bb814c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7936bbb628ef5aa00105e804c97196c32ca345c6f951dfb27f022a69bb814c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:01:18 compute-0 podman[219535]: 2026-01-21 14:01:18.983722238 +0000 UTC m=+0.137951719 container init 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 14:01:18 compute-0 podman[219535]: 2026-01-21 14:01:18.991360971 +0000 UTC m=+0.145590422 container start 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 14:01:18 compute-0 podman[219535]: 2026-01-21 14:01:18.995499401 +0000 UTC m=+0.149728862 container attach 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 14:01:19 compute-0 sudo[219622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-telelrlncmmtdnwfgptpwtubmowgosxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004078.7337143-229-55553962586106/AnsiballZ_stat.py'
Jan 21 14:01:19 compute-0 sudo[219622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:19 compute-0 python3.9[219625]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:01:19 compute-0 sudo[219622]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:19 compute-0 sudo[219800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctcpmntztnpkrglibmkgrrripyubfzzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004078.7337143-229-55553962586106/AnsiballZ_copy.py'
Jan 21 14:01:19 compute-0 sudo[219800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:19 compute-0 lvm[219822]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:01:19 compute-0 lvm[219822]: VG ceph_vg0 finished
Jan 21 14:01:19 compute-0 lvm[219823]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:01:19 compute-0 lvm[219823]: VG ceph_vg1 finished
Jan 21 14:01:19 compute-0 lvm[219825]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:01:19 compute-0 lvm[219825]: VG ceph_vg2 finished
Jan 21 14:01:19 compute-0 ceph-mon[75031]: pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:19 compute-0 python3.9[219807]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004078.7337143-229-55553962586106/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:19 compute-0 cool_tesla[219592]: {}
Jan 21 14:01:19 compute-0 sudo[219800]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:19 compute-0 systemd[1]: libpod-4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42.scope: Deactivated successfully.
Jan 21 14:01:19 compute-0 podman[219535]: 2026-01-21 14:01:19.782570017 +0000 UTC m=+0.936799488 container died 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 14:01:19 compute-0 systemd[1]: libpod-4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42.scope: Consumed 1.308s CPU time.
Jan 21 14:01:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7936bbb628ef5aa00105e804c97196c32ca345c6f951dfb27f022a69bb814c7-merged.mount: Deactivated successfully.
Jan 21 14:01:19 compute-0 podman[219535]: 2026-01-21 14:01:19.82553517 +0000 UTC m=+0.979764631 container remove 4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_tesla, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 14:01:19 compute-0 systemd[1]: libpod-conmon-4d8168905fdfe4c49f4f7df0a702b0ccbfe46a59bce16df27237de0ee0bf5a42.scope: Deactivated successfully.
Jan 21 14:01:19 compute-0 sudo[219328]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:01:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:01:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:01:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:01:19 compute-0 sudo[219897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:01:19 compute-0 sudo[219897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:01:19 compute-0 sudo[219897]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:19 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:20 compute-0 sudo[220012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cruyyferxysyicysiuircsmkimdiilmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004079.909272-244-189721834046466/AnsiballZ_command.py'
Jan 21 14:01:20 compute-0 sudo[220012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:20 compute-0 python3.9[220014]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:01:20 compute-0 sudo[220012]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:20 compute-0 sudo[220165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mewlkqydxohujvcxleybgkesfsxpztnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004080.5076222-252-152121897524916/AnsiballZ_lineinfile.py'
Jan 21 14:01:20 compute-0 sudo[220165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:01:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:01:20 compute-0 ceph-mon[75031]: pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:20 compute-0 python3.9[220167]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:20 compute-0 sudo[220165]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:21 compute-0 sudo[220317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfwhgfbfxpzgodnalfprftpcwcctnsle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004081.1362336-260-3507648586390/AnsiballZ_replace.py'
Jan 21 14:01:21 compute-0 sudo[220317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:21 compute-0 python3.9[220319]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:21 compute-0 sudo[220317]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:21 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:22 compute-0 sudo[220469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjlkxizmzehedayddcekeugbbcgrwogv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004081.9099016-268-67627205225357/AnsiballZ_replace.py'
Jan 21 14:01:22 compute-0 sudo[220469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:22 compute-0 python3.9[220471]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:22 compute-0 sudo[220469]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:23 compute-0 sudo[220621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzhssmwxtdtdautzwescvynoptbqrpdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004082.7357206-277-73720649319550/AnsiballZ_lineinfile.py'
Jan 21 14:01:23 compute-0 sudo[220621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:23 compute-0 ceph-mon[75031]: pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:23 compute-0 python3.9[220623]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:23 compute-0 sudo[220621]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:23 compute-0 sudo[220773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjxjhzzwvulczijtyqkqicqykgqaoltd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004083.3852618-277-257256912436688/AnsiballZ_lineinfile.py'
Jan 21 14:01:23 compute-0 sudo[220773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:23 compute-0 python3.9[220775]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:23 compute-0 sudo[220773]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:01:23 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:24 compute-0 sudo[220925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfpgfvduhbzuozyjhskdlgygsgdwdkhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004083.9380078-277-133986537530630/AnsiballZ_lineinfile.py'
Jan 21 14:01:24 compute-0 sudo[220925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:24 compute-0 python3.9[220927]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:24 compute-0 sudo[220925]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:24 compute-0 sudo[221077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zihopjmucalcwqtgjqhcnyvkpgubjeut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004084.4645252-277-40833116679078/AnsiballZ_lineinfile.py'
Jan 21 14:01:24 compute-0 sudo[221077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:24 compute-0 python3.9[221079]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:24 compute-0 sudo[221077]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:25 compute-0 sudo[221229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ierfoktzgrczbqznzscjhmhpsdwjyqfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004085.0485158-306-136462516614746/AnsiballZ_stat.py'
Jan 21 14:01:25 compute-0 sudo[221229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:25 compute-0 python3.9[221231]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 14:01:25 compute-0 sudo[221229]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:25 compute-0 ceph-mon[75031]: pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:25 compute-0 sudo[221383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsjytwptrfrdwyrmnzxfkawhpvxtdrwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004085.634566-314-81358921788604/AnsiballZ_command.py'
Jan 21 14:01:25 compute-0 sudo[221383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:25 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:26 compute-0 python3.9[221385]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:01:26 compute-0 sudo[221383]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:26 compute-0 sudo[221536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-filulxxdfajaryxyvsrvojegeujzmkmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004086.2736235-323-221657586791351/AnsiballZ_systemd_service.py'
Jan 21 14:01:26 compute-0 sudo[221536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:26 compute-0 python3.9[221538]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:01:26 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 21 14:01:26 compute-0 sudo[221536]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:27 compute-0 sudo[221692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dffehouhnvvorprqxvzazmisyesezmzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004087.0488105-331-104588028342170/AnsiballZ_systemd_service.py'
Jan 21 14:01:27 compute-0 sudo[221692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:27 compute-0 python3.9[221694]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:01:27 compute-0 ceph-mon[75031]: pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:27 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:28 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 21 14:01:28 compute-0 udevadm[221699]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 21 14:01:28 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 21 14:01:28 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 21 14:01:28 compute-0 multipathd[221703]: --------start up--------
Jan 21 14:01:28 compute-0 multipathd[221703]: read /etc/multipath.conf
Jan 21 14:01:28 compute-0 multipathd[221703]: path checkers start up
Jan 21 14:01:29 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 21 14:01:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:01:29 compute-0 sudo[221692]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:29 compute-0 ceph-mon[75031]: pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:29 compute-0 sudo[221860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eufxoipuzffzjolttigkqapmazqqfieh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004089.3977475-343-210806296718924/AnsiballZ_file.py'
Jan 21 14:01:29 compute-0 sudo[221860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:29 compute-0 python3.9[221862]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 21 14:01:29 compute-0 sudo[221860]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:29 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:30 compute-0 sudo[222012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvyorblndauqboszlwknuokefmdmojrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004090.0594456-351-102513050525569/AnsiballZ_modprobe.py'
Jan 21 14:01:30 compute-0 sudo[222012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:30 compute-0 python3.9[222014]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 21 14:01:30 compute-0 kernel: Key type psk registered
Jan 21 14:01:30 compute-0 sudo[222012]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:31 compute-0 sudo[222175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkcwuvtqwwvlmrstuoeyymojmypmzncd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004090.7469246-359-189038850835010/AnsiballZ_stat.py'
Jan 21 14:01:31 compute-0 sudo[222175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:31 compute-0 python3.9[222177]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:01:31 compute-0 sudo[222175]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:31 compute-0 sudo[222298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-setwmjcgddpbgvaavhrgjpnsdiabdkrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004090.7469246-359-189038850835010/AnsiballZ_copy.py'
Jan 21 14:01:31 compute-0 sudo[222298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:31 compute-0 ceph-mon[75031]: pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:31 compute-0 python3.9[222300]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769004090.7469246-359-189038850835010/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:31 compute-0 sudo[222298]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:31 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:32 compute-0 sudo[222450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmoewufpbsaguaaibschbsixefmogmop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004092.0675695-375-175904931127127/AnsiballZ_lineinfile.py'
Jan 21 14:01:32 compute-0 sudo[222450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:32 compute-0 python3.9[222452]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:32 compute-0 sudo[222450]: pam_unix(sudo:session): session closed for user root
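The modprobe task at 14:01:30 loads nvme-fabrics into the running kernel only (persistent=disabled); the kernel's "Key type psk registered" line in the same second appears to be a side effect of that load. Persistence is handled separately: the copy task writes /etc/modules-load.d/nvme-fabrics.conf and the lineinfile task ensures the same token in /etc/modules. Assuming the module-load.conf.j2 template renders nothing but the module name, both files end up carrying the single line:

    nvme-fabrics

systemd-modules-load.service reads the modules-load.d drop-in at boot, which is why the play restarts that service next.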
Jan 21 14:01:33 compute-0 sudo[222612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrgmbjdtauobhfgmiziiigadyjynjavs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004092.7555292-383-195090752607124/AnsiballZ_systemd.py'
Jan 21 14:01:33 compute-0 sudo[222612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:33 compute-0 podman[222576]: 2026-01-21 14:01:33.205092092 +0000 UTC m=+0.137619022 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 21 14:01:33 compute-0 python3.9[222618]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 14:01:33 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 21 14:01:33 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 21 14:01:33 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 21 14:01:33 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 21 14:01:33 compute-0 systemd[1]: Finished Load Kernel Modules.
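A quick way to confirm the restarted loader actually picked the module up is to look for its sysfs directory; a minimal check in Python, keeping in mind the dash-to-underscore rename the kernel applies to module names:

    import os
    # nvme-fabrics shows up under /sys/module as nvme_fabrics once loaded
    print(os.path.isdir("/sys/module/nvme_fabrics"))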
Jan 21 14:01:33 compute-0 sudo[222612]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:01:33.891 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:01:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:01:33.891 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:01:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:01:33.892 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:01:33 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 21 14:01:34 compute-0 sudo[222781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltavgomebslggyfnzczrmtcjbtfaskyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004093.7312622-391-189393843056252/AnsiballZ_dnf.py'
Jan 21 14:01:34 compute-0 sudo[222781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:01:34 compute-0 python3.9[222783]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 21 14:01:34 compute-0 ceph-mon[75031]: pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:35 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 14:01:37 compute-0 ceph-mon[75031]: pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 21 14:01:37 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 14:01:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:01:39
Jan 21 14:01:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:01:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:01:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'images', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'vms', '.rgw.root', 'default.rgw.control']
Jan 21 14:01:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:01:39 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 14:01:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:01:40 compute-0 podman[222788]: 2026-01-21 14:01:40.396194026 +0000 UTC m=+0.112394924 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:01:41 compute-0 systemd[1]: Reloading.
Jan 21 14:01:41 compute-0 systemd-rc-local-generator[222833]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:01:41 compute-0 systemd-sysv-generator[222837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:01:41 compute-0 ceph-mon[75031]: pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 14:01:41 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 14:01:42 compute-0 systemd[1]: Reloading.
Jan 21 14:01:42 compute-0 systemd-sysv-generator[222875]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:01:42 compute-0 systemd-rc-local-generator[222872]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:01:42 compute-0 systemd-logind[780]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 21 14:01:42 compute-0 lvm[222917]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:01:42 compute-0 lvm[222917]: VG ceph_vg2 finished
Jan 21 14:01:42 compute-0 lvm[222915]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:01:42 compute-0 lvm[222915]: VG ceph_vg0 finished
Jan 21 14:01:42 compute-0 lvm[222918]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:01:42 compute-0 lvm[222918]: VG ceph_vg1 finished
Jan 21 14:01:42 compute-0 systemd-logind[780]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 21 14:01:43 compute-0 ceph-mon[75031]: pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 14:01:43 compute-0 ceph-mon[75031]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 14:01:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 21 14:01:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 21 14:01:43 compute-0 systemd[1]: Reloading.
Jan 21 14:01:43 compute-0 systemd-sysv-generator[222976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:01:43 compute-0 systemd-rc-local-generator[222973]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:01:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 21 14:01:43 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 14:01:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:01:45 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 21 14:01:46 compute-0 ceph-mon[75031]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 14:01:47 compute-0 ceph-mon[75031]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 21 14:01:47 compute-0 ceph-mon[75031]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 21 14:01:47 compute-0 sudo[222781]: pam_unix(sudo:session): session closed for user root
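The dnf task opened at 14:01:34 and its sudo session only closes here at 14:01:47; the systemd Reloading cycles and the man-db-cache-update run in between are consistent with the nvme-cli RPM transaction executing its scriptlets and file triggers. A hedged post-install check (assuming the package puts an nvme binary somewhere on the PATH):

    import shutil
    # prints the resolved path, e.g. /usr/sbin/nvme, or None if absent
    print(shutil.which("nvme"))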
Jan 21 14:01:47 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:48 compute-0 sudo[224272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hblbngyykkzsdixrjismxvbrsqrublez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004107.9436464-399-281169827860763/AnsiballZ_systemd_service.py'
Jan 21 14:01:48 compute-0 sudo[224272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:48 compute-0 python3.9[224274]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 14:01:48 compute-0 iscsid[217126]: iscsid shutting down.
Jan 21 14:01:48 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 21 14:01:48 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 21 14:01:48 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 21 14:01:48 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 21 14:01:48 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 21 14:01:48 compute-0 systemd[1]: Started Open-iSCSI.
Jan 21 14:01:48 compute-0 sudo[224272]: pam_unix(sudo:session): session closed for user root
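The skipped one-time step is expected on an already-configured node: as the message at 14:01:48 states, iscsi.service only runs when no initiator name exists yet, i.e. its unit carries a condition equivalent to this sketch:

    [Unit]
    ConditionPathExists=!/etc/iscsi/initiatorname.iscsi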
Jan 21 14:01:48 compute-0 ceph-mon[75031]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 21 14:01:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 21 14:01:48 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.688s CPU time.
Jan 21 14:01:48 compute-0 systemd[1]: run-r7ae700fde18a45f08d176fc9364d6fc6.service: Deactivated successfully.
Jan 21 14:01:49 compute-0 sudo[224429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yitkqwtjdasmymayqiscabhhmjxsonew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004108.8135917-407-178369020715266/AnsiballZ_systemd_service.py'
Jan 21 14:01:49 compute-0 sudo[224429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:49 compute-0 python3.9[224431]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 14:01:49 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 21 14:01:49 compute-0 multipathd[221703]: exit (signal)
Jan 21 14:01:49 compute-0 multipathd[221703]: --------shut down-------
Jan 21 14:01:49 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 21 14:01:49 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 21 14:01:49 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 21 14:01:49 compute-0 multipathd[224438]: --------start up--------
Jan 21 14:01:49 compute-0 multipathd[224438]: read /etc/multipath.conf
Jan 21 14:01:49 compute-0 multipathd[224438]: path checkers start up
Jan 21 14:01:49 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 21 14:01:49 compute-0 sudo[224429]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:49 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 21 14:01:49 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:50 compute-0 python3.9[224596]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 21 14:01:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:01:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:01:51 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 21 14:01:51 compute-0 sudo[224751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxdanhczqybsanicbywichflgtczwzjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004110.7946143-425-277862464651830/AnsiballZ_file.py'
Jan 21 14:01:51 compute-0 sudo[224751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:51 compute-0 python3.9[224753]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:01:51 compute-0 sudo[224751]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:51 compute-0 ceph-mon[75031]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:51 compute-0 sudo[224903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njimyrcanqfdvwqfeuguokmfpsffrfhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004111.5765476-436-259247603197439/AnsiballZ_systemd_service.py'
Jan 21 14:01:51 compute-0 sudo[224903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:52 compute-0 python3.9[224905]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 14:01:52 compute-0 systemd[1]: Reloading.
Jan 21 14:01:52 compute-0 systemd-rc-local-generator[224933]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:01:52 compute-0 systemd-sysv-generator[224936]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:01:52 compute-0 sudo[224903]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:53 compute-0 python3.9[225090]: ansible-ansible.builtin.service_facts Invoked
Jan 21 14:01:53 compute-0 network[225107]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 21 14:01:53 compute-0 network[225108]: 'network-scripts' will be removed from distribution in near future.
Jan 21 14:01:53 compute-0 network[225109]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 21 14:01:53 compute-0 ceph-mon[75031]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:54 compute-0 ceph-mon[75031]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:01:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:57 compute-0 ceph-mon[75031]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:57 compute-0 sudo[225380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyqqfulrjnwifewpngwauzghxyawdfwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004117.4381707-455-18452121760627/AnsiballZ_systemd_service.py'
Jan 21 14:01:57 compute-0 sudo[225380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:01:58 compute-0 python3.9[225382]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:01:58 compute-0 sudo[225380]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:58 compute-0 sudo[225533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcddelepyxuijjilemwaodrlzqxpezjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004118.211879-455-147574234506420/AnsiballZ_systemd_service.py'
Jan 21 14:01:58 compute-0 sudo[225533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:58 compute-0 python3.9[225535]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:01:58 compute-0 sudo[225533]: pam_unix(sudo:session): session closed for user root
Jan 21 14:01:59 compute-0 sudo[225686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gidbonngdfvmwrkfjwdmqzciuscxhcud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004118.9852116-455-220917239855496/AnsiballZ_systemd_service.py'
Jan 21 14:01:59 compute-0 sudo[225686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:01:59 compute-0 python3.9[225688]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:01:59 compute-0 sudo[225686]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:00 compute-0 ceph-mon[75031]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:00 compute-0 sudo[225839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mynytbbolfceezwvetxbldxpturpptwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004119.770288-455-251270563520947/AnsiballZ_systemd_service.py'
Jan 21 14:02:00 compute-0 sudo[225839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:00 compute-0 python3.9[225841]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:02:00 compute-0 sudo[225839]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:00 compute-0 sudo[225992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atmfnmbrkqrfgfsusjxbkrtrvsphnlhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004120.5536273-455-171983166754466/AnsiballZ_systemd_service.py'
Jan 21 14:02:00 compute-0 sudo[225992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:01 compute-0 python3.9[225994]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:02:01 compute-0 sudo[225992]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:01 compute-0 ceph-mon[75031]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:01 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 21 14:02:01 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 21 14:02:01 compute-0 sudo[226147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnecchjokkyxuyueabktssxdfnaddwjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004121.475258-455-135705610552652/AnsiballZ_systemd_service.py'
Jan 21 14:02:01 compute-0 sudo[226147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:02 compute-0 python3.9[226149]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:02:02 compute-0 sudo[226147]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:02 compute-0 sudo[226300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phyazrfksegzotdpohujfdrntbqlgsvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004122.2441752-455-40995032914009/AnsiballZ_systemd_service.py'
Jan 21 14:02:02 compute-0 sudo[226300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:02 compute-0 python3.9[226302]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:02:02 compute-0 sudo[226300]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:03 compute-0 sudo[226468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crcssonemhscslpajqdknkxblkzohasx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004123.0098143-455-113447232361860/AnsiballZ_systemd_service.py'
Jan 21 14:02:03 compute-0 sudo[226468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:03 compute-0 podman[226427]: 2026-01-21 14:02:03.369871265 +0000 UTC m=+0.116787350 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 14:02:03 compute-0 ceph-mon[75031]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:03 compute-0 python3.9[226475]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:02:03 compute-0 sudo[226468]: pam_unix(sudo:session): session closed for user root
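Eight legacy TripleO units have now been stopped and disabled one task at a time, and the file tasks logged next delete their unit files before the play would reload systemd to pick up the removals. A condensed sketch of the whole pattern (not the playbook itself, which drives ansible.builtin.systemd_service and ansible.builtin.file per unit):

    import subprocess, pathlib

    UNITS = [
        "tripleo_nova_compute", "tripleo_nova_migration_target",
        "tripleo_nova_api_cron", "tripleo_nova_api", "tripleo_nova_conductor",
        "tripleo_nova_metadata", "tripleo_nova_scheduler", "tripleo_nova_vnc_proxy",
    ]

    for unit in UNITS:
        # mirrors systemd_service state=stopped enabled=False
        subprocess.run(["systemctl", "disable", "--now", f"{unit}.service"], check=False)
        # mirrors ansible.builtin.file state=absent on the unit file
        pathlib.Path(f"/usr/lib/systemd/system/{unit}.service").unlink(missing_ok=True)

    # pick up the deletions, as a later daemon_reload task would
    subprocess.run(["systemctl", "daemon-reload"], check=False)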
Jan 21 14:02:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:04 compute-0 sudo[226632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqelpzuwpcfwoijseeundgdgkvrdsiue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004124.0186117-514-71022011992941/AnsiballZ_file.py'
Jan 21 14:02:04 compute-0 sudo[226632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:04 compute-0 python3.9[226634]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:04 compute-0 sudo[226632]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:04 compute-0 sudo[226784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddrzsfosffumwlusoaasvmrpfseydmdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004124.6298463-514-92744832708719/AnsiballZ_file.py'
Jan 21 14:02:04 compute-0 sudo[226784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:05 compute-0 python3.9[226786]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:05 compute-0 sudo[226784]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:05 compute-0 sudo[226936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztpaznemebqhlokjxvwkciqkevllnmks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004125.1908474-514-78094081098149/AnsiballZ_file.py'
Jan 21 14:02:05 compute-0 sudo[226936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:05 compute-0 python3.9[226938]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:05 compute-0 sudo[226936]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:05 compute-0 ceph-mon[75031]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:06 compute-0 sudo[227088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubvvhelpbywacbcqxszmottibcbquuxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004125.7741742-514-223941248917540/AnsiballZ_file.py'
Jan 21 14:02:06 compute-0 sudo[227088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.070486) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126070519, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1927, "num_deletes": 253, "total_data_size": 3337846, "memory_usage": 3380184, "flush_reason": "Manual Compaction"}
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126160936, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1869755, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11749, "largest_seqno": 13675, "table_properties": {"data_size": 1863526, "index_size": 3176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15532, "raw_average_key_size": 20, "raw_value_size": 1849779, "raw_average_value_size": 2402, "num_data_blocks": 147, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003895, "oldest_key_time": 1769003895, "file_creation_time": 1769004126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 90531 microseconds, and 5305 cpu microseconds.
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.161014) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1869755 bytes OK
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.161034) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.168642) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.168881) EVENT_LOG_v1 {"time_micros": 1769004126168871, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.168907) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3329744, prev total WAL file size 3329744, number of live WAL files 2.
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.169857) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1825KB)], [29(7893KB)]
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126169926, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9952989, "oldest_snapshot_seqno": -1}
Jan 21 14:02:06 compute-0 python3.9[227090]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:06 compute-0 sudo[227088]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4063 keys, 7982506 bytes, temperature: kUnknown
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126241971, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7982506, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7953421, "index_size": 17839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 96446, "raw_average_key_size": 23, "raw_value_size": 7878289, "raw_average_value_size": 1939, "num_data_blocks": 776, "num_entries": 4063, "num_filter_entries": 4063, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.242197) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7982506 bytes
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.363783) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.0 rd, 110.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.7 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(9.6) write-amplify(4.3) OK, records in: 4475, records dropped: 412 output_compression: NoCompression
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.363823) EVENT_LOG_v1 {"time_micros": 1769004126363806, "job": 12, "event": "compaction_finished", "compaction_time_micros": 72135, "compaction_time_cpu_micros": 17922, "output_level": 6, "num_output_files": 1, "total_output_size": 7982506, "num_input_records": 4475, "num_output_records": 4063, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126364860, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004126366439, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.169703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.366536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.366540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.366542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.366543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:02:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:02:06.366545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
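[annotation] The compaction summary at 14:02:06 reports read-write-amplify(9.6) and write-amplify(4.3). Those figures follow from the logged byte counts, MB in(1.8, 7.7) out(7.6); a minimal sketch of the arithmetic (variable names are ours, and RocksDB computes in exact bytes, so the rounded MB values only approximate the printed numbers):

    # Reconstruct RocksDB's amplification figures from the values logged above.
    input_l0_mb = 1.8   # read from the start level (1 file @ L0)
    input_l6_mb = 7.7   # read from the output level (1 file @ L6)
    output_mb   = 7.6   # written to L6 (7982506 bytes)

    write_amplify = output_mb / input_l0_mb                                      # ~4.2 (logged 4.3)
    read_write_amplify = (input_l0_mb + input_l6_mb + output_mb) / input_l0_mb   # ~9.5 (logged 9.6)
    print(f"write-amplify ~ {write_amplify:.1f}, read-write-amplify ~ {read_write_amplify:.1f}")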
Jan 21 14:02:06 compute-0 sudo[227240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koewsfkvhkdprifldeopejfrwsbkdcia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004126.3462121-514-118856880825658/AnsiballZ_file.py'
Jan 21 14:02:06 compute-0 sudo[227240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:06 compute-0 python3.9[227242]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:06 compute-0 sudo[227240]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:07 compute-0 ceph-mon[75031]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:07 compute-0 sudo[227392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmaqqyegjgnlqlbnkhkoflgivlibtacq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004126.937987-514-172963339113202/AnsiballZ_file.py'
Jan 21 14:02:07 compute-0 sudo[227392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:07 compute-0 python3.9[227394]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:07 compute-0 sudo[227392]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:07 compute-0 sudo[227544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqvlkytuqfktowksshbkunbtqpfbjkno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004127.580602-514-163219739792509/AnsiballZ_file.py'
Jan 21 14:02:07 compute-0 sudo[227544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:08 compute-0 python3.9[227546]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:08 compute-0 sudo[227544]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:08 compute-0 sudo[227696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gddgdoygalzmuxqwpqllellcvdrdoxiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004128.200619-514-38540652713385/AnsiballZ_file.py'
Jan 21 14:02:08 compute-0 sudo[227696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:08 compute-0 python3.9[227698]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:08 compute-0 sudo[227696]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:09 compute-0 sudo[227848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjnndxxbhssyamzmsbhyxsrczklgdvfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004128.8541398-571-120781193240418/AnsiballZ_file.py'
Jan 21 14:02:09 compute-0 sudo[227848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:09 compute-0 python3.9[227850]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:09 compute-0 sudo[227848]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:09 compute-0 ceph-mon[75031]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:09 compute-0 sudo[228000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eksanjmgtvhkfnzrocbwggkbrzntspfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004129.4398296-571-168572632594124/AnsiballZ_file.py'
Jan 21 14:02:09 compute-0 sudo[228000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:09 compute-0 python3.9[228002]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:09 compute-0 sudo[228000]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:10 compute-0 sudo[228152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxgfozsdzdgnjiijjdilyohkvzhothfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004130.0576417-571-55095840192951/AnsiballZ_file.py'
Jan 21 14:02:10 compute-0 sudo[228152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:10 compute-0 python3.9[228154]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:10 compute-0 sudo[228152]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:10 compute-0 sudo[228319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbvyeveurlepvhjdpslghhgtkwjzfkkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004130.72116-571-160657375672142/AnsiballZ_file.py'
Jan 21 14:02:11 compute-0 sudo[228319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:11 compute-0 podman[228278]: 2026-01-21 14:02:11.003372203 +0000 UTC m=+0.052394365 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
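[annotation] The health_status=healthy event above is emitted when podman executes the container's configured healthcheck (the /openstack/healthcheck script mounted per the config_data). A sketch that queries the same status on demand, using the `podman healthcheck run` subcommand, which exits 0 when the container is healthy:

    import subprocess

    # Runs the container's own healthcheck command once and reports the result.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else "unhealthy")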
Jan 21 14:02:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:02:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:02:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:02:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:02:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:02:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:02:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:11 compute-0 python3.9[228323]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:11 compute-0 sudo[228319]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:11 compute-0 ceph-mon[75031]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:11 compute-0 sudo[228474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgwaddnrdhvxtlfqibyqekfyjqdopfhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004131.351034-571-39266319801561/AnsiballZ_file.py'
Jan 21 14:02:11 compute-0 sudo[228474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:11 compute-0 python3.9[228476]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:11 compute-0 sudo[228474]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:12 compute-0 sudo[228626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skzwutxtsnfiasluyervzpxaajmvcmdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004131.9514768-571-6339664361097/AnsiballZ_file.py'
Jan 21 14:02:12 compute-0 sudo[228626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:12 compute-0 python3.9[228628]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:12 compute-0 sudo[228626]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:12 compute-0 sudo[228778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckyppejqonfrzkvsphwuhzhszzhuatbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004132.5996988-571-212489309105253/AnsiballZ_file.py'
Jan 21 14:02:12 compute-0 sudo[228778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:13 compute-0 python3.9[228780]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:13 compute-0 sudo[228778]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:13 compute-0 sudo[228930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fumtffyaqtauhjatnhfssruefxfjekzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004133.1823702-571-221286214648849/AnsiballZ_file.py'
Jan 21 14:02:13 compute-0 sudo[228930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:13 compute-0 python3.9[228932]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:13 compute-0 sudo[228930]: pam_unix(sudo:session): session closed for user root
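[annotation] The repeated `ansible-ansible.builtin.file ... state=absent` invocations from 14:02:06 through 14:02:13 remove the TripleO nova unit files. A minimal sketch (ours, not the actual EDPM role) of the effect; sweeping both unit directories is a simplification, since the logged tasks target specific paths under /usr/lib/systemd/system and /etc/systemd/system:

    from pathlib import Path

    unit_dirs = ["/usr/lib/systemd/system", "/etc/systemd/system"]
    units = [
        "tripleo_nova_api.service", "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service", "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service", "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service", "tripleo_nova_api_cron.service",
    ]
    for d in unit_dirs:
        for unit in units:
            # No-op when the file is already gone, like file state=absent.
            Path(d, unit).unlink(missing_ok=True)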
Jan 21 14:02:13 compute-0 ceph-mon[75031]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:14 compute-0 sudo[229082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buefwfgsrjvsjyzjzowjmaskcnfsxhmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004133.8540566-629-57974773395515/AnsiballZ_command.py'
Jan 21 14:02:14 compute-0 sudo[229082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:14 compute-0 python3.9[229084]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:02:14 compute-0 sudo[229082]: pam_unix(sudo:session): session closed for user root
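[annotation] The shell fragment logged above disables certmonger only if it is active, and masks it only when no local unit file exists. The same guard as a subprocess sketch; `systemctl is-active` exits non-zero for an inactive unit, which skips the whole branch:

    import subprocess
    from pathlib import Path

    if subprocess.run(["systemctl", "is-active", "certmonger.service"]).returncode == 0:
        subprocess.run(["systemctl", "disable", "--now", "certmonger.service"], check=True)
        # Mask only when no local unit overrides the packaged one,
        # mirroring the `test -f ... || systemctl mask` in the task.
        if not Path("/etc/systemd/system/certmonger.service").is_file():
            subprocess.run(["systemctl", "mask", "certmonger.service"], check=True)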
Jan 21 14:02:14 compute-0 ceph-mon[75031]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:15 compute-0 python3.9[229236]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 21 14:02:15 compute-0 sudo[229386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmmwrrkfojoxmyfxudmnbvmycslyybxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004135.4310224-647-97523524180657/AnsiballZ_systemd_service.py'
Jan 21 14:02:15 compute-0 sudo[229386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:16 compute-0 python3.9[229388]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 14:02:16 compute-0 systemd[1]: Reloading.
Jan 21 14:02:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:16 compute-0 systemd-rc-local-generator[229416]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:02:16 compute-0 systemd-sysv-generator[229419]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:02:16 compute-0 sudo[229386]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:16 compute-0 sudo[229573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uispsvgrroqdjgiqmpnbpvhbmnsmtlez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004136.585867-655-248903663691037/AnsiballZ_command.py'
Jan 21 14:02:16 compute-0 sudo[229573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:17 compute-0 python3.9[229575]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:02:17 compute-0 ceph-mon[75031]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:18 compute-0 sudo[229573]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:18 compute-0 sudo[229726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bntwmqrfdcikbremfhsegsxqsqbocvkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004138.197159-655-5289345900739/AnsiballZ_command.py'
Jan 21 14:02:18 compute-0 sudo[229726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:18 compute-0 python3.9[229728]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:02:18 compute-0 sudo[229726]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:19 compute-0 sudo[229879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oczplvquzvbazacesdpxncbosfafrzpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004138.8220396-655-97961774089543/AnsiballZ_command.py'
Jan 21 14:02:19 compute-0 sudo[229879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:19 compute-0 ceph-mon[75031]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:19 compute-0 python3.9[229881]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:02:19 compute-0 sudo[229879]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:19 compute-0 sudo[230032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tucwgktjkqkdokjduyfuvvwgragkqufg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004139.4217312-655-93910082492159/AnsiballZ_command.py'
Jan 21 14:02:19 compute-0 sudo[230032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:19 compute-0 python3.9[230034]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:02:19 compute-0 sudo[230032]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:20 compute-0 sudo[230040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:02:20 compute-0 sudo[230040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:02:20 compute-0 sudo[230040]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:20 compute-0 sudo[230089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:02:20 compute-0 sudo[230089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:02:20 compute-0 sudo[230248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpvdpvctssucljmkhjdysjftvafmwulp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004140.1126301-655-268795014415341/AnsiballZ_command.py'
Jan 21 14:02:20 compute-0 sudo[230248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:20 compute-0 python3.9[230250]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:02:20 compute-0 sudo[230248]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:20 compute-0 sudo[230089]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:02:20 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:02:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:02:20 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:02:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:02:20 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:02:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:02:20 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:02:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:02:20 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:02:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:02:20 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
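[annotation] The audit lines above show the cephadm mgr module dispatching mon commands such as "config generate-minimal-conf" and "auth get" to the leader mon. The same commands are available from the CLI; a sketch (assumes a reachable cluster and an admin keyring on the host):

    import subprocess

    # Produces a minimal ceph.conf containing the fsid and mon_host entries.
    minimal_conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(minimal_conf)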
Jan 21 14:02:21 compute-0 sudo[230395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:02:21 compute-0 sudo[230395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:02:21 compute-0 sudo[230395]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:21 compute-0 sudo[230441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujhbqyvdrvhhpmwrxvlbsyaftwrhflia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004140.748331-655-78641713792431/AnsiballZ_command.py'
Jan 21 14:02:21 compute-0 sudo[230441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:21 compute-0 sudo[230446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:02:21 compute-0 sudo[230446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:02:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:21 compute-0 python3.9[230445]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:02:21 compute-0 sudo[230441]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:21 compute-0 podman[230507]: 2026-01-21 14:02:21.336644316 +0000 UTC m=+0.026588943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:02:21 compute-0 ceph-mon[75031]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:21 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:02:21 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:02:21 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:02:21 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:02:21 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:02:21 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:02:21 compute-0 sudo[230647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkqbhzkwmmgphnsxnjqkrdxqbqvzjvlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004141.3490393-655-140098834876341/AnsiballZ_command.py'
Jan 21 14:02:21 compute-0 podman[230507]: 2026-01-21 14:02:21.634994237 +0000 UTC m=+0.324938834 container create c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 14:02:21 compute-0 sudo[230647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:21 compute-0 systemd[1]: Started libpod-conmon-c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07.scope.
Jan 21 14:02:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:02:21 compute-0 python3.9[230649]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:02:21 compute-0 sudo[230647]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:22 compute-0 podman[230507]: 2026-01-21 14:02:22.074157998 +0000 UTC m=+0.764102635 container init c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 14:02:22 compute-0 podman[230507]: 2026-01-21 14:02:22.083518494 +0000 UTC m=+0.773463111 container start c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 14:02:22 compute-0 awesome_meninsky[230652]: 167 167
Jan 21 14:02:22 compute-0 systemd[1]: libpod-c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07.scope: Deactivated successfully.
Jan 21 14:02:22 compute-0 podman[230507]: 2026-01-21 14:02:22.103792413 +0000 UTC m=+0.793737020 container attach c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 14:02:22 compute-0 podman[230507]: 2026-01-21 14:02:22.104692255 +0000 UTC m=+0.794636862 container died c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-41b2384991e6da3acd3396bf2857b8bbaeb7d1290cbc4d22158693246c746866-merged.mount: Deactivated successfully.
Jan 21 14:02:22 compute-0 podman[230507]: 2026-01-21 14:02:22.432770904 +0000 UTC m=+1.122715501 container remove c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:02:22 compute-0 sudo[230819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obtonacyppudqjalyhynjirsthhvgqqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004142.1423697-655-28843443475980/AnsiballZ_command.py'
Jan 21 14:02:22 compute-0 sudo[230819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:22 compute-0 systemd[1]: libpod-conmon-c8d5ff976d0686dd0a931df7c12a74720c82ae0b6e4a55f7073c87a9f7f86f07.scope: Deactivated successfully.
Jan 21 14:02:22 compute-0 python3.9[230822]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 21 14:02:22 compute-0 sudo[230819]: pam_unix(sudo:session): session closed for user root
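[annotation] After the unit files are removed and systemd is reloaded, the job clears any lingering failed state for each unit (the `systemctl reset-failed` commands between 14:02:17 and 14:02:22). A compact equivalent of that sequence, in the order it appears in the log:

    import subprocess

    for unit in [
        "tripleo_nova_compute", "tripleo_nova_migration_target",
        "tripleo_nova_api_cron", "tripleo_nova_api", "tripleo_nova_conductor",
        "tripleo_nova_metadata", "tripleo_nova_scheduler", "tripleo_nova_vnc_proxy",
    ]:
        # reset-failed is harmless if the unit was never in a failed state.
        subprocess.run(["/usr/bin/systemctl", "reset-failed", f"{unit}.service"])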
Jan 21 14:02:22 compute-0 podman[230829]: 2026-01-21 14:02:22.598884853 +0000 UTC m=+0.043001069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:02:22 compute-0 podman[230829]: 2026-01-21 14:02:22.836210561 +0000 UTC m=+0.280326747 container create 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 14:02:22 compute-0 systemd[1]: Started libpod-conmon-485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15.scope.
Jan 21 14:02:22 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:23 compute-0 ceph-mon[75031]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:23 compute-0 podman[230829]: 2026-01-21 14:02:23.11330345 +0000 UTC m=+0.557419656 container init 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 21 14:02:23 compute-0 podman[230829]: 2026-01-21 14:02:23.120496763 +0000 UTC m=+0.564612949 container start 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:02:23 compute-0 podman[230829]: 2026-01-21 14:02:23.17296647 +0000 UTC m=+0.617082666 container attach 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 14:02:23 compute-0 inspiring_wright[230870]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:02:23 compute-0 inspiring_wright[230870]: --> All data devices are unavailable
Jan 21 14:02:23 compute-0 systemd[1]: libpod-485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15.scope: Deactivated successfully.
Jan 21 14:02:23 compute-0 podman[230829]: 2026-01-21 14:02:23.614823184 +0000 UTC m=+1.058939380 container died 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
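[annotation] The `lvm batch` run above is handed three logical volumes (/dev/ceph_vg0/ceph_lv0 ... /dev/ceph_vg2/ceph_lv2) and reports "All data devices are unavailable", which usually means ceph-volume rejected them as already in use, typically because they already carry prepared OSDs. A sketch that checks this via `ceph-volume lvm list --format json` (the command cephadm runs next, at 14:02:27); the JSON shape, keyed by OSD id, is assumed from ceph-volume's documented output:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(osd_id, dev.get("lv_path"), dev.get("type"))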
Jan 21 14:02:23 compute-0 sudo[231027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzqmltjwihhgrclnvgvaurlqgtjtmzyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004143.60072-734-144913692416408/AnsiballZ_file.py'
Jan 21 14:02:23 compute-0 sudo[231027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:24 compute-0 python3.9[231029]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:24 compute-0 sudo[231027]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:24 compute-0 sudo[231179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ontnanmcxlqbejrzlzcllgccdclhliwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004144.2235236-734-37730040091166/AnsiballZ_file.py'
Jan 21 14:02:24 compute-0 sudo[231179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:24 compute-0 python3.9[231181]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:24 compute-0 sudo[231179]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:25 compute-0 sudo[231332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvidortczlmsjyezlnmetwctarmvoyyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004144.9377413-734-159643781537379/AnsiballZ_file.py'
Jan 21 14:02:25 compute-0 sudo[231332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:25 compute-0 python3.9[231334]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4d9db48f32efe39276661b98a00f3f53c045e72cb0fd37900a4ec02ad634cf0-merged.mount: Deactivated successfully.
Jan 21 14:02:25 compute-0 sudo[231332]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:25 compute-0 sudo[231484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpiejedxivuighbosxxlcxlckvkkugnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004145.6334248-756-140570348781448/AnsiballZ_file.py'
Jan 21 14:02:25 compute-0 sudo[231484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:26 compute-0 python3.9[231486]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:26 compute-0 sudo[231484]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:26 compute-0 sudo[231636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixlguplssaonfcillcgrdkazhcodapyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004146.291568-756-119250183216177/AnsiballZ_file.py'
Jan 21 14:02:26 compute-0 sudo[231636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:26 compute-0 python3.9[231638]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:26 compute-0 sudo[231636]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:27 compute-0 ceph-mon[75031]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:27 compute-0 sudo[231788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkliadrqoqrzpceukvriazkfxxnwpocx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004147.0732257-756-278721288981804/AnsiballZ_file.py'
Jan 21 14:02:27 compute-0 sudo[231788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:27 compute-0 podman[230829]: 2026-01-21 14:02:27.358140116 +0000 UTC m=+4.802256342 container remove 485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 14:02:27 compute-0 systemd[1]: libpod-conmon-485cc9b47ca8c166c05b70691f8f8b97cac639eae38aa10733b6befd180dfe15.scope: Deactivated successfully.
Jan 21 14:02:27 compute-0 sudo[230446]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:27 compute-0 sudo[231791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:02:27 compute-0 sudo[231791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:02:27 compute-0 sudo[231791]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:27 compute-0 sudo[231816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:02:27 compute-0 sudo[231816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:02:27 compute-0 python3.9[231790]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:27 compute-0 sudo[231788]: pam_unix(sudo:session): session closed for user root
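[annotation] The file-module calls from 14:02:24 onward establish the nova directory layout: each path owned by zuul, mode 0755, SELinux type container_file_t. A sketch of the same effect outside Ansible (paths copied from the log; `chcon` is one way to apply the setype, while the module itself uses the libselinux bindings, and os.makedirs' mode is subject to the umask):

    import os
    import shutil
    import subprocess

    dirs = [
        "/var/lib/openstack/config/nova",
        "/var/lib/openstack/config/containers",
        "/var/lib/openstack/config/nova_nvme_cleaner",
        "/var/lib/nova",
        "/var/lib/_nova_secontext",
        "/var/lib/nova/instances",
    ]
    for d in dirs:
        os.makedirs(d, mode=0o755, exist_ok=True)
        shutil.chown(d, user="zuul", group="zuul")
        subprocess.run(["chcon", "-t", "container_file_t", d], check=True)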
Jan 21 14:02:27 compute-0 podman[231929]: 2026-01-21 14:02:27.811682313 +0000 UTC m=+0.024730528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:02:27 compute-0 podman[231929]: 2026-01-21 14:02:27.94699743 +0000 UTC m=+0.160045625 container create 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:02:27 compute-0 systemd[1]: Started libpod-conmon-371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947.scope.
Jan 21 14:02:28 compute-0 sudo[232017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-divucmppdnoeinmkummtocxdbzivdyju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004147.725133-756-95571544297665/AnsiballZ_file.py'
Jan 21 14:02:28 compute-0 sudo[232017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:02:28 compute-0 podman[231929]: 2026-01-21 14:02:28.064818034 +0000 UTC m=+0.277866249 container init 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 14:02:28 compute-0 podman[231929]: 2026-01-21 14:02:28.072063528 +0000 UTC m=+0.285111723 container start 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 14:02:28 compute-0 stoic_moser[232021]: 167 167
Jan 21 14:02:28 compute-0 systemd[1]: libpod-371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947.scope: Deactivated successfully.
Jan 21 14:02:28 compute-0 podman[231929]: 2026-01-21 14:02:28.129872553 +0000 UTC m=+0.342920748 container attach 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 14:02:28 compute-0 podman[231929]: 2026-01-21 14:02:28.130442508 +0000 UTC m=+0.343490703 container died 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 14:02:28 compute-0 ceph-mon[75031]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:28 compute-0 python3.9[232023]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:28 compute-0 sudo[232017]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0c1168f56506f6d806d758d6a4f76803f993f05c98565b04bccba38fe487d72-merged.mount: Deactivated successfully.
Jan 21 14:02:28 compute-0 podman[231929]: 2026-01-21 14:02:28.402602736 +0000 UTC m=+0.615650931 container remove 371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moser, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 14:02:28 compute-0 systemd[1]: libpod-conmon-371929b618a094f0ee6dcaef0421582cea3aed041a1949842286713bc17cc947.scope: Deactivated successfully.
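The podman[231929] and podman[232169] sequences above show the lifecycle cephadm drives for each short-lived ceph-volume helper: image pull, container create, init, start, attach, died, remove, with systemd opening and closing the matching libpod-conmon scope around it. A minimal sketch that reconstructs these lifecycles from a saved excerpt of this journal; the journal.txt path is an assumption, and the name= capture assumes the container name is the first label after image=, as it is in the lines above:

    import re
    from collections import defaultdict

    # Matches podman event lines such as:
    #   podman[231929]: 2026-01-21 14:02:28.07... +0000 UTC m=+0.28... container start <64-hex id> (image=..., name=stoic_moser, ...)
    EVENT_RE = re.compile(
        r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \S+) .* "
        r"container (?P<event>create|init|start|attach|died|remove) "
        r"(?P<cid>[0-9a-f]{64}) \(image=[^,]*, name=(?P<name>[^,)]+)"
    )

    def lifecycles(path="journal.txt"):
        """Group podman container events by container id, in log order."""
        events = defaultdict(list)
        with open(path) as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if m:
                    events[m["cid"]].append((m["ts"], m["event"], m["name"]))
        return events

    for cid, evs in lifecycles().items():
        print(cid[:12], evs[0][2], "->", " ".join(e for _, e, _ in evs))

Run against this excerpt it would print, for example, "371929b618a0 stoic_moser -> create init start attach died remove".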
Jan 21 14:02:28 compute-0 sudo[232208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfuahdbmnfguthjrwmmftankpmghduxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004148.3397176-756-42570937387582/AnsiballZ_file.py'
Jan 21 14:02:28 compute-0 sudo[232208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:28 compute-0 podman[232169]: 2026-01-21 14:02:28.554095323 +0000 UTC m=+0.024751158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:02:28 compute-0 python3.9[232210]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:29 compute-0 sudo[232208]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:29 compute-0 podman[232169]: 2026-01-21 14:02:29.052197755 +0000 UTC m=+0.522853610 container create 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:02:29 compute-0 systemd[1]: Started libpod-conmon-9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510.scope.
Jan 21 14:02:29 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429b27cecfebede8b297425b802c63f172ad41703158158008e2a768e05e2ab2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429b27cecfebede8b297425b802c63f172ad41703158158008e2a768e05e2ab2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429b27cecfebede8b297425b802c63f172ad41703158158008e2a768e05e2ab2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/429b27cecfebede8b297425b802c63f172ad41703158158008e2a768e05e2ab2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:29 compute-0 podman[232169]: 2026-01-21 14:02:29.442626799 +0000 UTC m=+0.913282634 container init 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 14:02:29 compute-0 podman[232169]: 2026-01-21 14:02:29.452085758 +0000 UTC m=+0.922741573 container start 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 21 14:02:29 compute-0 sudo[232366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkmnimvbesyhgcerisugtmqemyagisik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004149.154487-756-57909897921734/AnsiballZ_file.py'
Jan 21 14:02:29 compute-0 sudo[232366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:29 compute-0 ceph-mon[75031]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:29 compute-0 podman[232169]: 2026-01-21 14:02:29.48659528 +0000 UTC m=+0.957251085 container attach 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 14:02:29 compute-0 python3.9[232370]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:29 compute-0 sudo[232366]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]: {
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:     "0": [
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:         {
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "devices": [
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "/dev/loop3"
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             ],
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_name": "ceph_lv0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_size": "21470642176",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "name": "ceph_lv0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "tags": {
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.cluster_name": "ceph",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.crush_device_class": "",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.encrypted": "0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.objectstore": "bluestore",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.osd_id": "0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.type": "block",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.vdo": "0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.with_tpm": "0"
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             },
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "type": "block",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "vg_name": "ceph_vg0"
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:         }
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:     ],
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:     "1": [
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:         {
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "devices": [
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "/dev/loop4"
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             ],
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_name": "ceph_lv1",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_size": "21470642176",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "name": "ceph_lv1",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "tags": {
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.cluster_name": "ceph",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.crush_device_class": "",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.encrypted": "0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.objectstore": "bluestore",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.osd_id": "1",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.type": "block",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.vdo": "0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.with_tpm": "0"
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             },
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "type": "block",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "vg_name": "ceph_vg1"
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:         }
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:     ],
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:     "2": [
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:         {
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "devices": [
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "/dev/loop5"
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             ],
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_name": "ceph_lv2",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_size": "21470642176",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "name": "ceph_lv2",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "tags": {
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.cluster_name": "ceph",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.crush_device_class": "",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.encrypted": "0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.objectstore": "bluestore",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.osd_id": "2",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.type": "block",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.vdo": "0",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:                 "ceph.with_tpm": "0"
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             },
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "type": "block",
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:             "vg_name": "ceph_vg2"
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:         }
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]:     ]
Jan 21 14:02:29 compute-0 friendly_cartwright[232246]: }
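The friendly_cartwright output above is the stdout of ceph-volume lvm list --format json: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags duplicated as a parsed tags object. A minimal sketch, assuming the payload has been captured to a hypothetical lvm_list.json, of extracting the osd_id -> (osd_fsid, LV path, backing devices) mapping that cephadm uses to rebuild its device inventory:

    import json

    def osd_map(path="lvm_list.json"):
        """Map OSD id -> (osd_fsid, lv_path, devices) from 'ceph-volume lvm list --format json'."""
        with open(path) as fh:
            data = json.load(fh)
        out = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                tags = lv.get("tags", {})
                out[osd_id] = (tags.get("ceph.osd_fsid"), lv.get("lv_path"), lv.get("devices", []))
        return out

    # For the listing above:
    # {'0': ('bb69e93d-312d-404f-89ad-65c71069da0f', '/dev/ceph_vg0/ceph_lv0', ['/dev/loop3']), ...}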
Jan 21 14:02:29 compute-0 systemd[1]: libpod-9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510.scope: Deactivated successfully.
Jan 21 14:02:29 compute-0 podman[232399]: 2026-01-21 14:02:29.80023319 +0000 UTC m=+0.026688245 container died 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:02:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-429b27cecfebede8b297425b802c63f172ad41703158158008e2a768e05e2ab2-merged.mount: Deactivated successfully.
Jan 21 14:02:30 compute-0 sudo[232536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdernssetkcksjikmokkbrqldvvyuozh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004149.8085957-756-69120169424306/AnsiballZ_file.py'
Jan 21 14:02:30 compute-0 sudo[232536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:30 compute-0 podman[232399]: 2026-01-21 14:02:30.121410453 +0000 UTC m=+0.347865468 container remove 9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cartwright, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 14:02:30 compute-0 systemd[1]: libpod-conmon-9a634d8183b196379d15579204a58d4dfec3a3f0861ea400b24d6bbc0c609510.scope: Deactivated successfully.
Jan 21 14:02:30 compute-0 sudo[231816]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:30 compute-0 python3.9[232538]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
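The recurring ansible.builtin.file tasks (run by zuul via sudo) pre-create host directories such as /var/lib/nova/instances, /etc/multipath, /etc/nvme and /run/openvswitch and label them setype=container_file_t so SELinux permits the containers that later bind-mount them. A rough equivalent of one such task as direct calls; ansible actually applies the context through libselinux bindings rather than chcon:

    import os
    import subprocess

    def container_dir(path, owner="zuul", group="zuul", mode=0o755):
        """mkdir + ownership + SELinux type, approximating the logged file tasks."""
        os.makedirs(path, mode=mode, exist_ok=True)
        subprocess.run(["chown", f"{owner}:{group}", path], check=True)
        subprocess.run(["chcon", "-t", "container_file_t", path], check=True)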
Jan 21 14:02:30 compute-0 sudo[232539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:02:30 compute-0 sudo[232539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:02:30 compute-0 sudo[232536]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:30 compute-0 sudo[232539]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:30 compute-0 sudo[232564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:02:30 compute-0 sudo[232564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:02:30 compute-0 podman[232624]: 2026-01-21 14:02:30.599225816 +0000 UTC m=+0.036681266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:02:30 compute-0 podman[232624]: 2026-01-21 14:02:30.624491806 +0000 UTC m=+0.061947286 container create f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:02:30 compute-0 systemd[1]: Started libpod-conmon-f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa.scope.
Jan 21 14:02:30 compute-0 systemd[1]: Started libcrun container.

Jan 21 14:02:30 compute-0 podman[232624]: 2026-01-21 14:02:30.742405312 +0000 UTC m=+0.179860782 container init f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 14:02:30 compute-0 podman[232624]: 2026-01-21 14:02:30.748909749 +0000 UTC m=+0.186365189 container start f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:02:30 compute-0 optimistic_turing[232640]: 167 167
Jan 21 14:02:30 compute-0 systemd[1]: libpod-f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa.scope: Deactivated successfully.
Jan 21 14:02:30 compute-0 podman[232624]: 2026-01-21 14:02:30.755938549 +0000 UTC m=+0.193393999 container attach f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:02:30 compute-0 podman[232624]: 2026-01-21 14:02:30.757728012 +0000 UTC m=+0.195183482 container died f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:02:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-11ac6d5e081745b881d9092df7983a3ee78b6af1d8394af26ed238ef4c7b175a-merged.mount: Deactivated successfully.
Jan 21 14:02:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:31 compute-0 podman[232624]: 2026-01-21 14:02:31.483862189 +0000 UTC m=+0.921317689 container remove f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 14:02:31 compute-0 systemd[1]: libpod-conmon-f283337b7ea58bc50a75c13927ca82cff6734c0f772dcf852edf8e431033d6fa.scope: Deactivated successfully.
Jan 21 14:02:31 compute-0 podman[232665]: 2026-01-21 14:02:31.655950622 +0000 UTC m=+0.025752433 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:02:31 compute-0 ceph-mon[75031]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:32 compute-0 podman[232665]: 2026-01-21 14:02:32.104794675 +0000 UTC m=+0.474596476 container create 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:02:32 compute-0 ceph-mon[75031]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:32 compute-0 systemd[1]: Started libpod-conmon-30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693.scope.
Jan 21 14:02:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ad2b38aaf19d25e671f00a9f57da134ce3d5a914b21497abe76980f16a7ab4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ad2b38aaf19d25e671f00a9f57da134ce3d5a914b21497abe76980f16a7ab4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ad2b38aaf19d25e671f00a9f57da134ce3d5a914b21497abe76980f16a7ab4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ad2b38aaf19d25e671f00a9f57da134ce3d5a914b21497abe76980f16a7ab4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:02:33 compute-0 podman[232665]: 2026-01-21 14:02:33.046875514 +0000 UTC m=+1.416677325 container init 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:02:33 compute-0 podman[232665]: 2026-01-21 14:02:33.0525006 +0000 UTC m=+1.422302391 container start 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:02:33 compute-0 podman[232665]: 2026-01-21 14:02:33.273778271 +0000 UTC m=+1.643580072 container attach 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:02:33 compute-0 lvm[232773]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:02:33 compute-0 lvm[232769]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:02:33 compute-0 lvm[232769]: VG ceph_vg0 finished
Jan 21 14:02:33 compute-0 lvm[232773]: VG ceph_vg2 finished
Jan 21 14:02:33 compute-0 lvm[232772]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:02:33 compute-0 lvm[232772]: VG ceph_vg1 finished
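The lvm[...] lines are udev-driven pvscan autoactivation: as each loop-device PV becomes visible inside the helper container's device view, LVM reports its VG complete and finished. A hedged sketch of an equivalent ad-hoc query on the host; the tag filter is an assumption, the flags are standard lvm2 (--reportformat json needs lvm2 >= 2.02.158):

    import json
    import subprocess

    def ceph_lvs():
        """Return LV records whose tags mark them as Ceph OSD volumes."""
        out = subprocess.run(
            ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        lvs = json.loads(out)["report"][0]["lv"]
        return [lv for lv in lvs if "ceph.osd_id=" in lv.get("lv_tags", "")]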
Jan 21 14:02:33 compute-0 vibrant_pascal[232681]: {}
Jan 21 14:02:33 compute-0 podman[232756]: 2026-01-21 14:02:33.858863713 +0000 UTC m=+0.121817601 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
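The health_status=healthy event for ovn_controller comes from podman's periodic healthcheck, which executes the configured test command (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/ovn_controller) inside the container. The same check can also be run on demand; a minimal sketch, with only the container name taken from the log:

    import subprocess

    def is_healthy(container="ovn_controller"):
        """Run the container's configured healthcheck once; podman exits 0 on healthy."""
        return subprocess.run(["podman", "healthcheck", "run", container]).returncode == 0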
Jan 21 14:02:33 compute-0 systemd[1]: libpod-30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693.scope: Deactivated successfully.
Jan 21 14:02:33 compute-0 podman[232665]: 2026-01-21 14:02:33.881008278 +0000 UTC m=+2.250810089 container died 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:02:33 compute-0 systemd[1]: libpod-30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693.scope: Consumed 1.279s CPU time.
Jan 21 14:02:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:02:33.892 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:02:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:02:33.892 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:02:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:02:33.893 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
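The ovn_metadata_agent DEBUG triplet (acquiring / acquired / released within about a millisecond) is oslo.concurrency's standard logging around the in-process lock that serializes ProcessMonitor._check_child_processes. A sketch of the same primitive; the lock name matches the log, the body is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # oslo.concurrency emits the acquire/acquired/released DEBUG lines
        # seen above around this critical section.
        pass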
Jan 21 14:02:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0ad2b38aaf19d25e671f00a9f57da134ce3d5a914b21497abe76980f16a7ab4-merged.mount: Deactivated successfully.
Jan 21 14:02:34 compute-0 podman[232665]: 2026-01-21 14:02:34.58629012 +0000 UTC m=+2.956091951 container remove 30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 14:02:34 compute-0 sudo[232564]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:02:34 compute-0 systemd[1]: libpod-conmon-30569d71f1a7a07f92a99858f8412598f06935c6eef9589d7b9a9e14b9710693.scope: Deactivated successfully.
Jan 21 14:02:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:02:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:02:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
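Taken together, the two scans explain the bare {} printed by vibrant_pascal: ceph-volume raw list finds nothing because all three OSDs here are LVM-backed and therefore only visible to lvm list; the mgr then persists the gathered inventory through the config-key set mgr/cephadm/host.compute-0.devices.0 commands logged by the mon above. A trivial consistency check under that assumption, reusing the two hypothetical captured payloads:

    import json

    def all_lvm_backed(lvm_json="lvm_list.json", raw_json="raw_list.json"):
        """True when the lvm scan found OSDs but the raw scan found none."""
        with open(lvm_json) as a, open(raw_json) as b:
            return bool(json.load(a)) and not json.load(b)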
Jan 21 14:02:34 compute-0 sudo[232801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:02:34 compute-0 sudo[232801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:02:34 compute-0 sudo[232801]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:35 compute-0 sudo[232951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwgokjmkgsapesamtgunaqikkbbhfery ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004155.0160503-945-56385034555002/AnsiballZ_getent.py'
Jan 21 14:02:35 compute-0 sudo[232951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:35 compute-0 python3.9[232953]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 21 14:02:35 compute-0 sudo[232951]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:35 compute-0 ceph-mon[75031]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:02:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:02:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:36 compute-0 sudo[233104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btzwnnoskgmymizrggabvwyymebsscxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004155.8224769-953-107531338109189/AnsiballZ_group.py'
Jan 21 14:02:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:36 compute-0 sudo[233104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:36 compute-0 python3.9[233106]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 21 14:02:37 compute-0 groupadd[233107]: group added to /etc/group: name=nova, GID=42436
Jan 21 14:02:37 compute-0 ceph-mon[75031]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:37 compute-0 groupadd[233107]: group added to /etc/gshadow: name=nova
Jan 21 14:02:37 compute-0 groupadd[233107]: new group: name=nova, GID=42436
Jan 21 14:02:37 compute-0 sudo[233104]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:38 compute-0 sudo[233262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmfjowxjopihlturuhadubhnpgcnaron ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004158.063609-961-172262316523200/AnsiballZ_user.py'
Jan 21 14:02:38 compute-0 sudo[233262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:38 compute-0 ceph-mon[75031]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:38 compute-0 python3.9[233264]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 21 14:02:38 compute-0 useradd[233266]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 21 14:02:39 compute-0 useradd[233266]: add 'nova' to group 'libvirt'
Jan 21 14:02:39 compute-0 useradd[233266]: add 'nova' to shadow group 'libvirt'
Jan 21 14:02:39 compute-0 sudo[233262]: pam_unix(sudo:session): session closed for user root
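The ansible.builtin.group / ansible.builtin.user pair materializes as the groupadd and useradd calls above: group nova (GID 42436), then user nova (UID 42436) with secondary group libvirt, shell /bin/sh and a created home. A sketch of the same sequence as direct commands, minus the idempotence checks the modules perform first:

    import subprocess

    def create_nova_user():
        """Replicate the logged groupadd/useradd for the nova service user."""
        subprocess.run(["groupadd", "-g", "42436", "nova"], check=True)
        subprocess.run(
            ["useradd", "-u", "42436", "-g", "nova", "-G", "libvirt",
             "-s", "/bin/sh", "-c", "nova user", "-m", "nova"],
            check=True,
        )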
Jan 21 14:02:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:02:39
Jan 21 14:02:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:02:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:02:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'vms', 'images', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.log']
Jan 21 14:02:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
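The balancer lines record one upmap optimization pass over the listed pools that produced no changes (prepared 0/10 upmap changes), which is expected for a cluster already reporting 305/305 PGs active+clean. A sketch for checking the balancer from the CLI; recent Ceph releases print JSON for this command, but the exact shape is not guaranteed across versions:

    import json
    import subprocess

    def balancer_status():
        """Return 'ceph balancer status' (mode, active flag, last optimization) as a dict."""
        out = subprocess.run(
            ["ceph", "balancer", "status"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)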
Jan 21 14:02:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:40 compute-0 sshd-session[233297]: Accepted publickey for zuul from 192.168.122.30 port 48244 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 14:02:40 compute-0 systemd-logind[780]: New session 51 of user zuul.
Jan 21 14:02:40 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 21 14:02:40 compute-0 sshd-session[233297]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 14:02:40 compute-0 sshd-session[233300]: Received disconnect from 192.168.122.30 port 48244:11: disconnected by user
Jan 21 14:02:40 compute-0 sshd-session[233300]: Disconnected from user zuul 192.168.122.30 port 48244
Jan 21 14:02:40 compute-0 sshd-session[233297]: pam_unix(sshd:session): session closed for user zuul
Jan 21 14:02:40 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 21 14:02:40 compute-0 systemd-logind[780]: Session 51 logged out. Waiting for processes to exit.
Jan 21 14:02:40 compute-0 systemd-logind[780]: Removed session 51.
Jan 21 14:02:40 compute-0 python3.9[233450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:02:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:02:41 compute-0 ceph-mon[75031]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:41 compute-0 podman[233545]: 2026-01-21 14:02:41.364998089 +0000 UTC m=+0.073577138 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 14:02:41 compute-0 python3.9[233582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004160.4831302-986-271394484781077/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
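[annotation] The stat/copy pair above is Ansible's idempotence check for /var/lib/openstack/config/nova/config.json: the legacy.stat call fetches the destination's SHA-1, and legacy.copy only ships new content when it differs (the checksum= value in the copy line is the source digest). A minimal sketch of the same check, with illustrative paths; the real modules also handle mode, setype and atomic writes:

    import hashlib
    import shutil

    def sha1_of(path):
        # Stream the file so large configs never load into memory at once.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_if_changed(src, dest):
        try:
            if sha1_of(dest) == sha1_of(src):
                return False          # unchanged: Ansible reports "ok"
        except FileNotFoundError:
            pass                      # destination missing: must copy
        shutil.copy2(src, dest)       # Ansible writes a tempfile, then renames
        return True                   # "changed"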
Jan 21 14:02:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:42 compute-0 python3.9[233740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:02:42 compute-0 python3.9[233816]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:43 compute-0 python3.9[233966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:02:43 compute-0 ceph-mon[75031]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:43 compute-0 python3.9[234087]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004162.7616181-986-200570131928423/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:44 compute-0 python3.9[234237]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:02:44 compute-0 python3.9[234358]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004163.9033308-986-123740259837830/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:45 compute-0 ceph-mon[75031]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:45 compute-0 python3.9[234508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:02:45 compute-0 python3.9[234629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004165.0469036-986-253796967057318/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:46 compute-0 python3.9[234779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:02:47 compute-0 python3.9[234900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004166.1347642-986-139852597775498/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:47 compute-0 ceph-mon[75031]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:47 compute-0 sudo[235050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqtpbfghbqrcabbcnxhehogymdivadxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004167.3769674-1069-66381284057524/AnsiballZ_file.py'
Jan 21 14:02:47 compute-0 sudo[235050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:47 compute-0 python3.9[235052]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:47 compute-0 sudo[235050]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:48 compute-0 sudo[235202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pywixbuccusybxmxwilbfucllnouzcdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004168.0550725-1077-172852463453068/AnsiballZ_copy.py'
Jan 21 14:02:48 compute-0 sudo[235202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:48 compute-0 python3.9[235204]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:02:48 compute-0 sudo[235202]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:49 compute-0 sudo[235354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izubcaxhrlhdhbczakbejbbxghjsrawf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004168.6758246-1085-110576582679581/AnsiballZ_stat.py'
Jan 21 14:02:49 compute-0 sudo[235354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:49 compute-0 python3.9[235356]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 14:02:49 compute-0 sudo[235354]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:49 compute-0 sudo[235506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pluzqvfmvriqltwiugolnktiuvyjtlts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004169.3950312-1093-89801238780914/AnsiballZ_stat.py'
Jan 21 14:02:49 compute-0 sudo[235506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:49 compute-0 ceph-mon[75031]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:49 compute-0 python3.9[235508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:02:49 compute-0 sudo[235506]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:50 compute-0 sudo[235629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfmbtiszbzpfkvkjfhgemzcsbmlqzgzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004169.3950312-1093-89801238780914/AnsiballZ_copy.py'
Jan 21 14:02:50 compute-0 sudo[235629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:50 compute-0 python3.9[235631]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769004169.3950312-1093-89801238780914/.source _original_basename=.075ictz7 follow=False checksum=11fa0e566769e53db26861537ed860b1d9835ba8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 21 14:02:50 compute-0 sudo[235629]: pam_unix(sudo:session): session closed for user root
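[annotation] The copy above writes /var/lib/nova/compute_id with mode 0400 and attributes=+i; the +i is the filesystem immutable flag, which Ansible applies with chattr so the persistent compute UUID cannot be modified or deleted in place. The equivalent by hand, as a sketch:

    import subprocess

    path = "/var/lib/nova/compute_id"
    # Set the immutable flag; even root must clear it (chattr -i)
    # before the file can be changed or removed.
    subprocess.run(["chattr", "+i", path], check=True)
    # lsattr shows an 'i' in the flag column for immutable files.
    subprocess.run(["lsattr", path], check=True)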
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:02:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
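[annotation] The pg_autoscaler pass above can be reproduced from the logged numbers: each pool's raw pg target is its share of used space times its bias times the cluster's PG budget, and a budget of 300 fits every line here, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 (an assumption about this cluster's sizing, not stated in the log). A worked check:

    # Assumed PG budget: 3 OSDs * mon_target_pg_per_osd (default 100).
    PG_BUDGET = 3 * 100

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * PG_BUDGET

    # Pool '.mgr': reproduces "pg target 0.0021557249951162337" above.
    print(pg_target(7.185749983720779e-06, 1.0))

    # Pool 'cephfs.cephfs.meta' (bias 4.0): reproduces
    # "pg target 0.0015303687579838134".
    print(pg_target(1.2753072983198444e-06, 4.0))

The raw target is then quantized to a power of two, and pools stay at their current pg_num (hence "quantized to 32 (current 32)" for near-zero targets) because the autoscaler only resizes when the target differs from the current value by a large enough factor.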
Jan 21 14:02:51 compute-0 ceph-mon[75031]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:51 compute-0 python3.9[235783]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 14:02:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:51 compute-0 python3.9[235935]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:02:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:52 compute-0 python3.9[236056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004171.4475796-1119-35897265561990/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:52 compute-0 python3.9[236206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 21 14:02:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:54 compute-0 ceph-mon[75031]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:54 compute-0 python3.9[236327]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769004172.5697136-1134-269264532895467/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 21 14:02:55 compute-0 sudo[236477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oionaqvycyxtjsdhobbvjqgazrhmdgvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004174.5863087-1151-233951574304705/AnsiballZ_container_config_data.py'
Jan 21 14:02:55 compute-0 sudo[236477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:55 compute-0 python3.9[236479]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 21 14:02:55 compute-0 sudo[236477]: pam_unix(sudo:session): session closed for user root
Jan 21 14:02:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:56 compute-0 ceph-mon[75031]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:56 compute-0 sudo[236629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfqmovicgomlneqbgdydywpwufwdebpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004175.6567683-1162-1088513972574/AnsiballZ_container_config_hash.py'
Jan 21 14:02:56 compute-0 sudo[236629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:56 compute-0 python3.9[236631]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 14:02:56 compute-0 sudo[236629]: pam_unix(sudo:session): session closed for user root
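[annotation] The container_config_hash call above is what feeds the EDPM_CONFIG_HASH values visible in the container environments earlier in this log (the dash-joined hex digests); digesting the rendered config files lets the role detect when a container must be recreated. The exact traversal is internal to edpm-ansible, so the following is only a sketch of the idea, with an illustrative path:

    import hashlib
    import os

    def dir_digest(root):
        # Hash file paths and contents in a stable order so the digest
        # changes only when the rendered configuration changes.
        h = hashlib.sha256()
        for dirpath, _, files in sorted(os.walk(root)):
            for name in sorted(files):
                path = os.path.join(dirpath, name)
                h.update(path.encode())
                with open(path, "rb") as f:
                    h.update(f.read())
        return h.hexdigest()

    print(dir_digest("/var/lib/openstack/config/containers"))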
Jan 21 14:02:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:02:57 compute-0 sudo[236781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuzzdivjcvmyvdpmqdzvlmgpslnfbfwp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769004176.7062814-1172-15867984815085/AnsiballZ_edpm_container_manage.py'
Jan 21 14:02:57 compute-0 sudo[236781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:02:57 compute-0 python3[236783]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 14:02:57 compute-0 ceph-mon[75031]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:02:58 compute-0 ceph-mon[75031]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:01 compute-0 ceph-mon[75031]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:06 compute-0 podman[236841]: 2026-01-21 14:03:06.102747021 +0000 UTC m=+1.819229722 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 14:03:06 compute-0 ceph-mon[75031]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:03:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:03:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:03:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:03:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:03:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:03:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:17 compute-0 ceph-mon[75031]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:17 compute-0 ceph-mon[75031]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:17 compute-0 ceph-mon[75031]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:17 compute-0 podman[236882]: 2026-01-21 14:03:17.993452991 +0000 UTC m=+5.723851246 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 21 14:03:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:18 compute-0 podman[236797]: 2026-01-21 14:03:18.126334387 +0000 UTC m=+20.599214217 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 14:03:18 compute-0 podman[236924]: 2026-01-21 14:03:18.298345036 +0000 UTC m=+0.090613489 container create be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init)
Jan 21 14:03:18 compute-0 podman[236924]: 2026-01-21 14:03:18.243653121 +0000 UTC m=+0.035921654 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 14:03:18 compute-0 python3[236783]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 21 14:03:18 compute-0 sudo[236781]: pam_unix(sudo:session): session closed for user root
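[annotation] The PODMAN-CONTAINER-DEBUG line above shows how edpm_container_manage flattens the config_data dict into a podman create command: environment becomes --env, net/pid map to --network/--pid, and each volumes entry becomes a --volume argument. A simplified reconstruction of that mapping (not the module's actual source; only the options visible in this log are handled):

    def podman_create_argv(name, cfg):
        # The real module also serializes cfg itself into a config_data label.
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        argv += ["--label", "config_id=edpm",
                 "--label", f"container_name={name}",
                 "--label", "managed_by=edpm_ansible",
                 "--log-driver", "journald", "--log-level", "info",
                 "--network", cfg.get("net", "bridge")]
        if "pid" in cfg:
            argv += ["--pid", cfg["pid"]]
        argv += [f"--privileged={cfg.get('privileged', False)}",
                 "--user", cfg.get("user", "root")]
        for opt in cfg.get("security_opt", []):
            argv += ["--security-opt", opt]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        argv += cfg.get("command", "").split()
        return argv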
Jan 21 14:03:18 compute-0 sudo[237112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfsmawcyjqixmuohtjxteoddcrozpznx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004198.6074908-1180-160452821802862/AnsiballZ_stat.py'
Jan 21 14:03:18 compute-0 sudo[237112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:18 compute-0 ceph-mon[75031]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:18 compute-0 ceph-mon[75031]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:18 compute-0 ceph-mon[75031]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:18 compute-0 ceph-mon[75031]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:18 compute-0 ceph-mon[75031]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:19 compute-0 python3.9[237114]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 14:03:19 compute-0 sudo[237112]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:19 compute-0 sudo[237266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giizzhhudpolkuzbdgmawdbelgfujtto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004199.517712-1192-133828178462778/AnsiballZ_container_config_data.py'
Jan 21 14:03:19 compute-0 sudo[237266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:19 compute-0 python3.9[237268]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 21 14:03:20 compute-0 sudo[237266]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:20 compute-0 sudo[237418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhfoisbpbbqyrwgpmbbmfjuyucthvpdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004200.2985127-1203-177723802846889/AnsiballZ_container_config_hash.py'
Jan 21 14:03:20 compute-0 sudo[237418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:20 compute-0 python3.9[237420]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 21 14:03:20 compute-0 sudo[237418]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:21 compute-0 ceph-mon[75031]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:21 compute-0 sudo[237570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfcglemoarputggzncibnxnvcphsokic ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769004201.08429-1213-108909275954984/AnsiballZ_edpm_container_manage.py'
Jan 21 14:03:21 compute-0 sudo[237570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:21 compute-0 python3[237572]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 21 14:03:21 compute-0 podman[237608]: 2026-01-21 14:03:21.748572689 +0000 UTC m=+0.055091064 container create 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 21 14:03:21 compute-0 podman[237608]: 2026-01-21 14:03:21.71724826 +0000 UTC m=+0.023766665 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 21 14:03:21 compute-0 python3[237572]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 21 14:03:21 compute-0 sudo[237570]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:22 compute-0 sudo[237796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqlbdcwqtelqrucmlfmkbqcdlwavaoro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004202.072888-1221-103647414953972/AnsiballZ_stat.py'
Jan 21 14:03:22 compute-0 sudo[237796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:22 compute-0 python3.9[237798]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 14:03:22 compute-0 sudo[237796]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:23 compute-0 sudo[237950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poznyptmfvmoncmtwnbjtplwbxxqmfdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004202.854506-1230-269810493938339/AnsiballZ_file.py'
Jan 21 14:03:23 compute-0 sudo[237950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:23 compute-0 python3.9[237952]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:03:23 compute-0 sudo[237950]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:23 compute-0 ceph-mon[75031]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:23 compute-0 sudo[238101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frbnppmonrdcxdxpjdcnioifrwmkcfiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004203.3996093-1230-145735232277953/AnsiballZ_copy.py'
Jan 21 14:03:23 compute-0 sudo[238101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:23 compute-0 python3.9[238103]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769004203.3996093-1230-145735232277953/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 21 14:03:23 compute-0 sudo[238101]: pam_unix(sudo:session): session closed for user root
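[annotation] The copy above installs /etc/systemd/system/edpm_nova_compute.service; its contents are not shown in the log, but the conmon pidfile passed to podman create and the later "Started libcrun container" line are consistent with a forking podman wrapper unit. A hypothetical minimal version, purely as a sketch:

    [Unit]
    Description=nova_compute container
    After=network-online.target

    [Service]
    Type=forking
    PIDFile=/run/nova_compute.pid
    ExecStart=/usr/bin/podman start nova_compute
    ExecStop=/usr/bin/podman stop -t 10 nova_compute
    Restart=always

    [Install]
    WantedBy=multi-user.target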
Jan 21 14:03:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:24 compute-0 sudo[238177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odbscvvmhgdkvapxekeuihjrqddcqfqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004203.3996093-1230-145735232277953/AnsiballZ_systemd.py'
Jan 21 14:03:24 compute-0 sudo[238177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:24 compute-0 python3.9[238179]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 21 14:03:24 compute-0 systemd[1]: Reloading.
Jan 21 14:03:24 compute-0 systemd-rc-local-generator[238205]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:03:24 compute-0 systemd-sysv-generator[238208]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:03:24 compute-0 sudo[238177]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:25 compute-0 sudo[238287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpqwheppiysllljyqcpybiylgttjkkfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004203.3996093-1230-145735232277953/AnsiballZ_systemd.py'
Jan 21 14:03:25 compute-0 sudo[238287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:25 compute-0 python3.9[238289]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 21 14:03:25 compute-0 systemd[1]: Reloading.
Jan 21 14:03:25 compute-0 systemd-sysv-generator[238317]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 21 14:03:25 compute-0 systemd-rc-local-generator[238314]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 21 14:03:25 compute-0 ceph-mon[75031]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:25 compute-0 systemd[1]: Starting nova_compute container...
Jan 21 14:03:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:26 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:26 compute-0 podman[238328]: 2026-01-21 14:03:26.320168341 +0000 UTC m=+0.431534080 container init 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:03:26 compute-0 podman[238328]: 2026-01-21 14:03:26.326307321 +0000 UTC m=+0.437673020 container start 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 21 14:03:26 compute-0 nova_compute[238343]: + sudo -E kolla_set_configs
Jan 21 14:03:26 compute-0 podman[238328]: nova_compute
Jan 21 14:03:26 compute-0 systemd[1]: Started nova_compute container.
Jan 21 14:03:26 compute-0 sudo[238287]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Validating config file
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying service configuration files
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Deleting /etc/ceph
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Creating directory /etc/ceph
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /etc/ceph
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Writing out command to execute
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 14:03:26 compute-0 nova_compute[238343]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
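[annotation] The INFO:__main__ lines above come from kolla's set_configs step: a JSON descriptor mounted at /var/lib/kolla/config_files/config.json lists each file to copy into place together with its destination and permission bits, and the entrypoint loops over it before the service starts. A minimal Python sketch of that copy-and-chmod loop, assuming a simplified descriptor with only source/dest/perm keys (the real script also handles directory trees, globs, owners, and an optional merge flag):

    import json
    import logging
    import os
    import shutil

    logging.basicConfig(level=logging.INFO)
    LOG = logging.getLogger(__name__)

    def set_configs(descriptor="/var/lib/kolla/config_files/config.json"):
        with open(descriptor) as f:
            config = json.load(f)
        for entry in config.get("config_files", []):
            source, dest = entry["source"], entry["dest"]
            LOG.info("Copying %s to %s", source, dest)          # "Copying ... to ..."
            os.makedirs(os.path.dirname(dest), exist_ok=True)   # dest paths are absolute here
            shutil.copy(source, dest)
            LOG.info("Setting permission for %s", dest)         # "Setting permission for ..."
            os.chmod(dest, int(entry.get("perm", "0600"), 8))   # perm is an octal string

Run against this container's descriptor, a loop like this would replay the same Copying/Setting permission pairs recorded above.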
Jan 21 14:03:26 compute-0 nova_compute[238343]: ++ cat /run_command
Jan 21 14:03:26 compute-0 nova_compute[238343]: + CMD=nova-compute
Jan 21 14:03:26 compute-0 nova_compute[238343]: + ARGS=
Jan 21 14:03:26 compute-0 nova_compute[238343]: + sudo kolla_copy_cacerts
Jan 21 14:03:26 compute-0 nova_compute[238343]: + [[ ! -n '' ]]
Jan 21 14:03:26 compute-0 nova_compute[238343]: + . kolla_extend_start
Jan 21 14:03:26 compute-0 nova_compute[238343]: Running command: 'nova-compute'
Jan 21 14:03:26 compute-0 nova_compute[238343]: + echo 'Running command: '\''nova-compute'\'''
Jan 21 14:03:26 compute-0 nova_compute[238343]: + umask 0022
Jan 21 14:03:26 compute-0 nova_compute[238343]: + exec nova-compute
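[annotation] At this point the traced shell has finished its setup: it reads the command kolla wrote to /run_command (the "Writing out command to execute" line), applies umask 0022, and execs it, so nova-compute replaces the shell as the container's main process. A rough Python equivalent of that tail, for illustration only:

    import os

    # Read the command kolla wrote out earlier ("Writing out command to execute").
    with open("/run_command") as f:
        cmd = f.read().strip()          # here: "nova-compute"

    os.umask(0o022)                     # matches the traced "umask 0022"
    print(f"Running command: '{cmd}'")
    os.execvp(cmd, [cmd])               # replace the current process, like "exec nova-compute"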
Jan 21 14:03:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:26 compute-0 ceph-mon[75031]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:27 compute-0 python3.9[238504]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 14:03:27 compute-0 python3.9[238655]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
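[annotation] The ansible-ansible.builtin.stat invocations are the deployment role's idempotence probe: before rewriting the edpm_nova_nvme_cleaner unit files it stats the existing ones and compares SHA-1 checksums, so unchanged files are left alone. Roughly what the module computes with get_checksum=True, sketched in Python (the real module also reports mode, ownership, MIME type, and attributes):

    import hashlib
    import os

    def stat_with_checksum(path):
        """Approximate ansible.builtin.stat with get_checksum=True, checksum_algorithm=sha1."""
        if not os.path.exists(path):
            return {"exists": False}
        info = os.stat(path)
        sha1 = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                sha1.update(chunk)
        return {"exists": True, "size": info.st_size, "checksum": sha1.hexdigest()}

    print(stat_with_checksum("/etc/systemd/system/edpm_nova_nvme_cleaner.service"))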
Jan 21 14:03:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:28 compute-0 nova_compute[238343]: 2026-01-21 14:03:28.702 238347 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 14:03:28 compute-0 nova_compute[238343]: 2026-01-21 14:03:28.702 238347 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 14:03:28 compute-0 nova_compute[238343]: 2026-01-21 14:03:28.703 238347 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 14:03:28 compute-0 nova_compute[238343]: 2026-01-21 14:03:28.703 238347 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
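[annotation] The three "Loaded VIF plugin class" lines are os-vif's plugin discovery: on first use the library walks its stevedore entry points, instantiates every installed plugin once per process, and logs the summary seen here. Reproducing the same log lines takes a single call, assuming os-vif and the plugin packages (vif_plug_linux_bridge, vif_plug_noop, vif_plug_ovs) are installed:

    import os_vif

    # Discovers and initializes every installed VIF plugin
    # (linux_bridge, noop, ovs in this deployment) exactly once per process.
    os_vif.initialize()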
Jan 21 14:03:28 compute-0 python3.9[238805]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 21 14:03:28 compute-0 nova_compute[238343]: 2026-01-21 14:03:28.842 238347 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:03:28 compute-0 nova_compute[238343]: 2026-01-21 14:03:28.868 238347 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:03:28 compute-0 nova_compute[238343]: 2026-01-21 14:03:28.868 238347 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
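[annotation] The failing grep at 14:03:28 is expected, not a fault: the volume-attach code probes whether the iscsiadm binary advertises the node.session.scan feature by grepping the binary itself, and exit code 1 simply means the string is absent, so manual session scans are treated as unsupported. Since /usr/sbin/iscsiadm was replaced by kolla's run-on-host shim at 14:03:26, the probe naturally finds nothing. A minimal sketch of such a probe with the same oslo.concurrency helper the log names (here exit code 1 is whitelisted for brevity; the real caller instead catches the ProcessExecutionError, which is what produces the "failed. Not Retrying." line):

    from oslo_concurrency import processutils

    # grep exits 0 when the pattern is found and 1 when it is not;
    # both are valid outcomes for a capability probe, so allow them.
    out, err = processutils.execute(
        "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
        check_exit_code=[0, 1])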
Jan 21 14:03:28 compute-0 ceph-mon[75031]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.450 238347 INFO nova.virt.driver [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 21 14:03:29 compute-0 sudo[238959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqbdqwypbzvwzhepxozpiftzmqnkhazb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004209.0488968-1290-241383292350646/AnsiballZ_podman_container.py'
Jan 21 14:03:29 compute-0 sudo[238959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.581 238347 INFO nova.compute.provider_config [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
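[annotation] The "No provider configs found" INFO is logged while nova-compute scans /etc/nova/provider_config/ for YAML files that attach extra traits or inventory to its placement resource providers; an empty directory is the normal case. The loading step is essentially a glob plus a YAML parse per file, sketched below under the assumption that PyYAML is available (the real loader also validates each file against a schema):

    import glob
    import os

    import yaml

    def load_provider_configs(path="/etc/nova/provider_config/"):
        configs = []
        for name in sorted(glob.glob(os.path.join(path, "*.yaml"))):
            with open(name) as f:
                configs.append(yaml.safe_load(f))
        if not configs:
            print(f"No provider configs found in {path}.")
        return configs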
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.594 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.595 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.595 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
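[annotation] Everything from "Full set of CONF:" downward is oslo.config's standard startup dump: because debug = True, oslo.service calls CONF.log_opt_values() once the service is assembled, emitting a banner, the config-file list, and one "name = value" DEBUG line per registered option, defaults included, with options registered as secret (transport_url, cache.backend_argument) masked as ****. Any oslo.config consumer can produce the same dump; a sketch assuming a readable /etc/nova/nova.conf:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF(["--config-file", "/etc/nova/nova.conf"], project="nova")

    # Emits the same banner and one "name = value" DEBUG line per option,
    # masking any option registered with secret=True (e.g. transport_url).
    CONF.log_opt_values(LOG, logging.DEBUG)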
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.595 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.595 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.596 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.596 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.596 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.596 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.597 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.598 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.599 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.599 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.599 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.599 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.599 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.600 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.600 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.600 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.600 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.600 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.601 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.602 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.603 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.604 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.605 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.605 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.605 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.605 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.605 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.606 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.607 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.608 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.609 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.610 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.611 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.612 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.613 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.614 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.615 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.616 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.617 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.618 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.619 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.620 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.621 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.622 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.623 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.624 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.625 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.626 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.627 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.628 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.629 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.630 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.631 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.632 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.633 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.634 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.634 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.634 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.634 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.634 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.635 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.636 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.637 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.637 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.637 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.637 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.638 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.639 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.639 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.639 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.639 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.640 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.640 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.640 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.640 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.641 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.641 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.641 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.641 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.641 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.642 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.643 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.644 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.645 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.646 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.646 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.646 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.646 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.646 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.647 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.648 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.649 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.650 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.651 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.652 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.652 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.652 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.652 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.652 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.653 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.654 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.655 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.656 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.657 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.657 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.657 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.657 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.657 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.658 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.658 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.658 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.658 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.658 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.659 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.660 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.660 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.660 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.660 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.660 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.661 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.662 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.663 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.664 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.665 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.666 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.667 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.668 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.669 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.669 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.669 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.669 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.669 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.670 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.671 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.671 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.671 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.671 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.671 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.672 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.672 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.672 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.672 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.673 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.673 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.673 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.673 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.673 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.674 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.674 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.674 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.674 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.674 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.675 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.675 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.675 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.675 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.675 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.676 238347 WARNING oslo_config.cfg [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 21 14:03:29 compute-0 nova_compute[238343]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 21 14:03:29 compute-0 nova_compute[238343]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Jan 21 14:03:29 compute-0 nova_compute[238343]: and ``live_migration_inbound_addr``, respectively.
Jan 21 14:03:29 compute-0 nova_compute[238343]: ).  Its value may be silently ignored in the future.
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.676 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.676 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.676 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.677 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.677 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.677 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.677 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.678 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.678 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.678 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.678 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.679 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.679 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.679 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.679 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.679 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rbd_secret_uuid        = 2f0e9cad-f0a3-5869-9cc3-8d84d071866a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.680 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.681 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.682 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.682 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.682 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.682 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.682 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.683 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.684 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.685 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.686 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.687 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.688 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.688 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.688 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.688 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.688 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.689 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.690 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.691 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.692 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.693 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.694 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.695 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.696 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.697 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.698 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.699 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.700 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.701 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.702 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.703 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.704 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.705 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.706 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.706 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.706 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.706 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.706 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.707 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.707 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.707 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.707 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.707 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.708 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.709 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.709 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.709 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.709 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.710 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.710 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.710 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.710 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.710 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.711 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.712 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.712 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.712 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.712 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.712 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.713 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.713 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.713 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.713 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.713 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.714 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.714 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.714 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.714 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.714 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.715 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.715 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.715 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.715 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.715 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.716 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.716 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.716 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.716 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.717 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.718 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.719 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.719 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.719 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.719 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.719 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.720 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.721 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.722 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.723 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.724 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.725 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.726 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.727 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.728 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.729 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.730 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.731 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.732 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.733 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.734 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.735 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.736 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.737 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.738 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.739 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.740 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.741 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.742 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.743 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.744 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.745 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.745 238347 DEBUG oslo_service.service [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.746 238347 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.760 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.761 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.761 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.762 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 21 14:03:29 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 21 14:03:29 compute-0 python3.9[238961]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 21 14:03:29 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 21 14:03:29 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.850 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f04c87b7280> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.853 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f04c87b7280> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.854 238347 INFO nova.virt.libvirt.driver [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Connection event '1' reason 'None'
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.869 238347 WARNING nova.virt.libvirt.driver [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 21 14:03:29 compute-0 nova_compute[238343]: 2026-01-21 14:03:29.869 238347 DEBUG nova.virt.libvirt.volume.mount [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 21 14:03:29 compute-0 sudo[238959]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:30 compute-0 sudo[239191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaecyfddkudbbipztufhusfbduxjzcoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004210.1206841-1298-152915948840213/AnsiballZ_systemd.py'
Jan 21 14:03:30 compute-0 sudo[239191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:30 compute-0 nova_compute[238343]: 2026-01-21 14:03:30.782 238347 INFO nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Libvirt host capabilities <capabilities>
Jan 21 14:03:30 compute-0 nova_compute[238343]: 
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <host>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <uuid>7823760d-0166-4122-8fb2-3165351e57e7</uuid>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <cpu>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <arch>x86_64</arch>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model>EPYC-Rome-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <vendor>AMD</vendor>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <microcode version='16777317'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <signature family='23' model='49' stepping='0'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='x2apic'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='tsc-deadline'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='osxsave'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='hypervisor'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='tsc_adjust'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='spec-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='stibp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='arch-capabilities'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='ssbd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='cmp_legacy'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='topoext'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='virt-ssbd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='lbrv'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='tsc-scale'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='vmcb-clean'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='pause-filter'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='pfthreshold'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='svme-addr-chk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='rdctl-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='skip-l1dfl-vmentry'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='mds-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature name='pschange-mc-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <pages unit='KiB' size='4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <pages unit='KiB' size='2048'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <pages unit='KiB' size='1048576'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </cpu>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <power_management>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <suspend_mem/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </power_management>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <iommu support='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <migration_features>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <live/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <uri_transports>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <uri_transport>tcp</uri_transport>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <uri_transport>rdma</uri_transport>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </uri_transports>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </migration_features>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <topology>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <cells num='1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <cell id='0'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:           <memory unit='KiB'>7864316</memory>
Jan 21 14:03:30 compute-0 nova_compute[238343]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 21 14:03:30 compute-0 nova_compute[238343]:           <pages unit='KiB' size='2048'>0</pages>
Jan 21 14:03:30 compute-0 nova_compute[238343]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 21 14:03:30 compute-0 nova_compute[238343]:           <distances>
Jan 21 14:03:30 compute-0 nova_compute[238343]:             <sibling id='0' value='10'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:           </distances>
Jan 21 14:03:30 compute-0 nova_compute[238343]:           <cpus num='8'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:           </cpus>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         </cell>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </cells>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </topology>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <cache>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </cache>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <secmodel>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model>selinux</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <doi>0</doi>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </secmodel>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <secmodel>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model>dac</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <doi>0</doi>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </secmodel>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </host>
Jan 21 14:03:30 compute-0 nova_compute[238343]: 
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <guest>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <os_type>hvm</os_type>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <arch name='i686'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <wordsize>32</wordsize>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <domain type='qemu'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <domain type='kvm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </arch>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <features>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <pae/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <nonpae/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <acpi default='on' toggle='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <apic default='on' toggle='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <cpuselection/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <deviceboot/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <disksnapshot default='on' toggle='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <externalSnapshot/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </features>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </guest>
Jan 21 14:03:30 compute-0 nova_compute[238343]: 
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <guest>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <os_type>hvm</os_type>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <arch name='x86_64'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <wordsize>64</wordsize>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <domain type='qemu'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <domain type='kvm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </arch>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <features>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <acpi default='on' toggle='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <apic default='on' toggle='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <cpuselection/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <deviceboot/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <disksnapshot default='on' toggle='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <externalSnapshot/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </features>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </guest>
Jan 21 14:03:30 compute-0 nova_compute[238343]: 
Jan 21 14:03:30 compute-0 nova_compute[238343]: </capabilities>
Jan 21 14:03:30 compute-0 nova_compute[238343]: 
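The <capabilities> document closed above and the <domainCapabilities> dump that follows are the raw XML that nova-compute requests from the local libvirt daemon during startup. As a minimal sketch of fetching the same two documents with the libvirt Python bindings (not Nova's actual code; the connection URI and the fields printed are illustrative assumptions), where the arguments to getDomainCapabilities mirror the debug line below, arch=i686 and machine_type=q35 under KVM:

    #!/usr/bin/env python3
    # Sketch: retrieve the host capabilities and per-machine-type domain
    # capabilities that this log dumps. Assumes a local libvirt daemon
    # reachable at qemu:///system (python3-libvirt installed).
    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')  # openReadOnly() also suffices here

    # Host-wide capabilities: one <guest> block per (os_type, arch) pair,
    # each listing its supported machine types, as seen above.
    caps = ET.fromstring(conn.getCapabilities())
    for guest in caps.findall('guest'):
        arch = guest.find('arch').get('name')
        machines = [m.text for m in guest.findall('arch/machine')]
        print(arch, machines[:3], '...')

    # Per-(arch, machine type) detail, matching the <domainCapabilities>
    # document that follows in the log. Positional arguments: emulator
    # binary (from the <emulator> element), arch, machine, virt type, flags.
    dom_caps = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'i686', 'q35', 'kvm', 0)
    print(ET.fromstring(dom_caps).find('machine').text)  # pc-q35-rhel9.8.0

    conn.close()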
Jan 21 14:03:30 compute-0 nova_compute[238343]: 2026-01-21 14:03:30.802 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 21 14:03:30 compute-0 nova_compute[238343]: 2026-01-21 14:03:30.831 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 21 14:03:30 compute-0 nova_compute[238343]: <domainCapabilities>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <domain>kvm</domain>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <arch>i686</arch>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <vcpu max='4096'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <iothreads supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <os supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <enum name='firmware'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <loader supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>rom</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>pflash</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='readonly'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>yes</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>no</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='secure'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>no</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </loader>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </os>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <cpu>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='host-passthrough' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='hostPassthroughMigratable'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>on</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>off</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='maximum' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='maximumMigratable'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>on</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>off</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='host-model' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <vendor>AMD</vendor>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='x2apic'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='hypervisor'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='stibp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='ssbd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='overflow-recov'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='succor'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='lbrv'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc-scale'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='flushbyasid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='pause-filter'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='pfthreshold'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='disable' name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='custom' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-noTSX'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='ClearwaterForest'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ddpd-u'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sha512'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sm3'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sm4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='ClearwaterForest-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ddpd-u'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sha512'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sm3'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sm4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cooperlake'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cooperlake-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cooperlake-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Denverton'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Denverton-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Denverton-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Denverton-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Dhyana-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Turin'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbpb'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Turin-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbpb'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-v5'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-128'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-256'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-512'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-128'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-256'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-512'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-noTSX'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v5'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v6'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v7'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='IvyBridge'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='KnightsMill'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512er'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512pf'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='KnightsMill-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512er'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512pf'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Opteron_G4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Opteron_G4-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Opteron_G5'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tbm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Opteron_G5-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tbm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SierraForest'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v5'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Snowridge'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='athlon'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='athlon-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='core2duo'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='core2duo-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='coreduo'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='coreduo-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='n270'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='n270-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='phenom'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='phenom-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </cpu>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <memoryBacking supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <enum name='sourceType'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>file</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>anonymous</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>memfd</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </memoryBacking>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <devices>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <disk supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='diskDevice'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>disk</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>cdrom</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>floppy</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>lun</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='bus'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>fdc</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>scsi</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>sata</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio-transitional</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio-non-transitional</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </disk>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <graphics supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vnc</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>egl-headless</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>dbus</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </graphics>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <video supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='modelType'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vga</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>cirrus</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>none</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>bochs</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>ramfb</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </video>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <hostdev supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='mode'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>subsystem</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='startupPolicy'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>default</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>mandatory</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>requisite</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>optional</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='subsysType'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>pci</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>scsi</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='capsType'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='pciBackend'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </hostdev>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <rng supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio-transitional</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio-non-transitional</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>random</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>egd</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>builtin</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </rng>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <filesystem supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='driverType'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>path</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>handle</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtiofs</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </filesystem>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <tpm supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>tpm-tis</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>tpm-crb</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>emulator</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>external</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='backendVersion'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>2.0</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </tpm>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <redirdev supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='bus'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </redirdev>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <channel supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>pty</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>unix</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </channel>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <crypto supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='model'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>qemu</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>builtin</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </crypto>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <interface supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='backendType'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>default</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>passt</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </interface>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <panic supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>isa</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>hyperv</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </panic>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <console supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>null</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vc</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>pty</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>dev</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>file</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>pipe</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>stdio</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>udp</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>tcp</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>unix</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>qemu-vdagent</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>dbus</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </console>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </devices>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <features>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <gic supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <vmcoreinfo supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <genid supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <backingStoreInput supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <backup supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <async-teardown supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <s390-pv supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <ps2 supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <tdx supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <sev supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <sgx supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <hyperv supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='features'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>relaxed</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vapic</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>spinlocks</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vpindex</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>runtime</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>synic</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>stimer</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>reset</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vendor_id</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>frequencies</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>reenlightenment</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>tlbflush</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>ipi</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>avic</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>emsr_bitmap</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>xmm_input</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <defaults>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <spinlocks>4095</spinlocks>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <stimer_direct>on</stimer_direct>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </defaults>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </hyperv>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <launchSecurity supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </features>
Jan 21 14:03:30 compute-0 nova_compute[238343]: </domainCapabilities>
Jan 21 14:03:30 compute-0 nova_compute[238343]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 14:03:30 compute-0 nova_compute[238343]: 2026-01-21 14:03:30.851 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 21 14:03:30 compute-0 nova_compute[238343]: <domainCapabilities>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <domain>kvm</domain>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <arch>i686</arch>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <vcpu max='240'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <iothreads supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <os supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <enum name='firmware'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <loader supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>rom</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>pflash</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='readonly'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>yes</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>no</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='secure'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>no</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </loader>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </os>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <cpu>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='host-passthrough' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='hostPassthroughMigratable'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>on</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>off</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='maximum' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='maximumMigratable'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>on</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>off</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='host-model' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <vendor>AMD</vendor>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='x2apic'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='hypervisor'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='stibp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='ssbd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='overflow-recov'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='succor'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='lbrv'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc-scale'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='flushbyasid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='pause-filter'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='pfthreshold'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='disable' name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='custom' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-noTSX'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 python3.9[239193]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='ClearwaterForest'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ddpd-u'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sha512'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sm3'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sm4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='ClearwaterForest-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ddpd-u'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sha512'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sm3'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sm4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cooperlake'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cooperlake-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cooperlake-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Denverton'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Denverton-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Denverton-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Denverton-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Dhyana-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Turin'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbpb'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-Turin-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbpb'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='EPYC-v5'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-128'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-256'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-512'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-128'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-256'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx10-512'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-noTSX'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Haswell-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
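All eight Haswell entries above are blocked by the same small set (erms, invpcid, pcid, plus hle and rtm on the TSX-capable variants), which typically means the host CPU, possibly itself a virtual one, masks those bits. The same information can be queried live rather than scraped from the log; a sketch using the libvirt Python binding, where the connection URI, emulator path, and machine type are assumptions for a typical RHEL 9 KVM host:

    # Sketch: fetch domain capabilities live and list the CPU models
    # libvirt marks usable. URI, emulator path, and machine type are
    # assumptions, not values taken from this log.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")
    caps = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm", "x86_64", "q35", "kvm", 0)
    mode = ET.fromstring(caps).find(".//cpu/mode[@name='custom']")
    usable = sorted(m.text for m in mode.findall("model")
                    if m.get("usable") == "yes")
    print("usable models:", ", ".join(usable))
    conn.close()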
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v5'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v6'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v7'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
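Every Icelake-Server variant is likewise unusable here; even the v1 baseline is missing the entire AVX-512 group along with pcid, invpcid, la57, and pku. To test one specific model instead of reading the whole dump, libvirt offers a compare call; a sketch, with the guest CPU XML and connection parameters as illustrative assumptions:

    # Sketch: ask the hypervisor whether one named model is runnable.
    # The guest CPU XML and connection parameters are assumptions.
    import libvirt

    CPU_XML = ("<cpu mode='custom' match='exact'>"
               "<model fallback='forbid'>Icelake-Server</model></cpu>")

    conn = libvirt.open("qemu:///system")
    res = conn.compareHypervisorCPU(
        "/usr/libexec/qemu-kvm", "x86_64", "q35", "kvm", CPU_XML, 0)
    # 0 = incompatible, 1 = identical, 2 = host CPU is a superset
    print("ok" if res > libvirt.VIR_CPU_COMPARE_INCOMPATIBLE else "blocked")
    conn.close()

virsh wraps the same API as its hypervisor-cpu-compare subcommand.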
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='IvyBridge'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='KnightsMill'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512er'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512pf'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='KnightsMill-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512er'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512pf'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Opteron_G4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Opteron_G4-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Opteron_G5'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tbm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Opteron_G5-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tbm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
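In contrast, Nehalem and SandyBridge (with their -IBRS revisions) are fully usable, and Penryn plus Opteron_G1 through G3 are usable but marked deprecated='yes', meaning a future libvirt release may reject them. When picking guest CPU models for a node, for example for Nova's [libvirt] cpu_mode = custom together with cpu_models, the usable and non-deprecated set is the one to draw from; a sketch that extracts it, reusing the parsed mode element from the first sketch:

    # Sketch: usable, non-deprecated models as candidates for a guest
    # CPU model list (e.g. Nova's [libvirt] cpu_models). `mode` is the
    # parsed <mode name='custom'> element from the first sketch; keeping
    # only entries that carry a canonical= attribute drops the -vN
    # aliases, leaving the unversioned names seen in this dump.
    def candidate_models(mode):
        return sorted(
            m.text for m in mode.findall("model")
            if m.get("usable") == "yes"
            and m.get("deprecated") != "yes"
            and m.get("canonical") is not None)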
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
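The SapphireRapids versions repeat the GraniteRapids pattern: blocked by the AMX, AVX-512, and TSX feature groups this host does not provide. On a mixed cluster, a common alternative to comparing dumps like this one by hand is to ask libvirt for a baseline model that all hosts support; a sketch, where the host CPU XML files and connection parameters are illustrative:

    # Sketch: compute a CPU definition every host in a set can run.
    # The host CPU XML files and connection parameters are illustrative;
    # each file would hold the <cpu> element of one host's capabilities.
    import libvirt

    conn = libvirt.open("qemu:///system")
    host_cpus = [open(p).read() for p in ("hostA-cpu.xml", "hostB-cpu.xml")]
    baseline = conn.baselineHypervisorCPU(
        "/usr/libexec/qemu-kvm", "x86_64", "q35", "kvm", host_cpus, 0)
    print(baseline)  # a <cpu> definition usable on every listed host
    conn.close()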
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SierraForest'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v5'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Snowridge'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='athlon'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='athlon-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='core2duo'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='core2duo-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='coreduo'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='coreduo-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='n270'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='n270-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 14:03:30 compute-0 systemd[1]: Stopping nova_compute container...
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='phenom'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='phenom-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </cpu>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <memoryBacking supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <enum name='sourceType'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>file</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>anonymous</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>memfd</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </memoryBacking>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <devices>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <disk supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='diskDevice'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>disk</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>cdrom</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>floppy</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>lun</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='bus'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>ide</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>fdc</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>scsi</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>sata</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio-transitional</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio-non-transitional</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </disk>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <graphics supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vnc</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>egl-headless</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>dbus</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </graphics>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <video supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='modelType'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vga</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>cirrus</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>none</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>bochs</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>ramfb</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </video>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <hostdev supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='mode'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>subsystem</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='startupPolicy'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>default</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>mandatory</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>requisite</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>optional</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='subsysType'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>pci</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>scsi</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='capsType'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='pciBackend'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </hostdev>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <rng supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio-transitional</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtio-non-transitional</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>random</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>egd</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>builtin</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </rng>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <filesystem supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='driverType'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>path</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>handle</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>virtiofs</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </filesystem>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <tpm supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>tpm-tis</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>tpm-crb</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>emulator</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>external</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='backendVersion'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>2.0</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </tpm>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <redirdev supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='bus'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </redirdev>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <channel supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>pty</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>unix</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </channel>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <crypto supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='model'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>qemu</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>builtin</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </crypto>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <interface supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='backendType'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>default</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>passt</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </interface>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <panic supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>isa</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>hyperv</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </panic>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <console supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>null</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vc</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>pty</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>dev</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>file</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>pipe</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>stdio</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>udp</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>tcp</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>unix</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>qemu-vdagent</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>dbus</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </console>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </devices>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <features>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <gic supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <vmcoreinfo supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <genid supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <backingStoreInput supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <backup supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <async-teardown supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <s390-pv supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <ps2 supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <tdx supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <sev supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <sgx supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <hyperv supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='features'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>relaxed</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vapic</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>spinlocks</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vpindex</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>runtime</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>synic</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>stimer</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>reset</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>vendor_id</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>frequencies</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>reenlightenment</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>tlbflush</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>ipi</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>avic</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>emsr_bitmap</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>xmm_input</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <defaults>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <spinlocks>4095</spinlocks>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <stimer_direct>on</stimer_direct>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </defaults>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </hyperv>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <launchSecurity supported='no'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </features>
Jan 21 14:03:30 compute-0 nova_compute[238343]: </domainCapabilities>
Jan 21 14:03:30 compute-0 nova_compute[238343]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 14:03:30 compute-0 nova_compute[238343]: 2026-01-21 14:03:30.928 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 21 14:03:30 compute-0 nova_compute[238343]: 2026-01-21 14:03:30.933 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 21 14:03:30 compute-0 nova_compute[238343]: <domainCapabilities>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <domain>kvm</domain>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <arch>x86_64</arch>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <vcpu max='4096'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <iothreads supported='yes'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <os supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <enum name='firmware'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>efi</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <loader supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>rom</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>pflash</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='readonly'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>yes</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>no</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='secure'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>yes</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>no</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </loader>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   </os>
Jan 21 14:03:30 compute-0 nova_compute[238343]:   <cpu>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='host-passthrough' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='hostPassthroughMigratable'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>on</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>off</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='maximum' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <enum name='maximumMigratable'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>on</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <value>off</value>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='host-model' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <vendor>AMD</vendor>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='x2apic'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='hypervisor'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='stibp'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='ssbd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='overflow-recov'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='succor'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='ibrs'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='lbrv'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc-scale'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='flushbyasid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='pause-filter'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='pfthreshold'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <feature policy='disable' name='xsaves'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:30 compute-0 nova_compute[238343]:     <mode name='custom' supported='yes'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-noTSX'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v3'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v4'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 14:03:30 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:30 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='ClearwaterForest'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ddpd-u'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sha512'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sm3'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sm4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='ClearwaterForest-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ddpd-u'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sha512'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sm3'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sm4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cooperlake'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cooperlake-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cooperlake-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Denverton'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Denverton-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Denverton-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Denverton-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Dhyana-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Turin'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbpb'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Turin-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbpb'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-v5'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-128'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-256'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-512'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-128'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-256'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-512'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-noTSX'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v5'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v6'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v7'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='IvyBridge'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='KnightsMill'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512er'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512pf'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='KnightsMill-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512er'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512pf'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Opteron_G4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Opteron_G4-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Opteron_G5'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tbm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Opteron_G5-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tbm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SierraForest'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v5'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Snowridge'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='athlon'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='athlon-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='core2duo'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='core2duo-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='coreduo'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='coreduo-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='n270'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='n270-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='phenom'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='phenom-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   </cpu>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <memoryBacking supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <enum name='sourceType'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <value>file</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <value>anonymous</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <value>memfd</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   </memoryBacking>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <devices>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <disk supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='diskDevice'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>disk</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>cdrom</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>floppy</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>lun</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='bus'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>fdc</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>scsi</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>sata</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio-transitional</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio-non-transitional</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </disk>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <graphics supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vnc</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>egl-headless</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>dbus</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </graphics>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <video supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='modelType'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vga</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>cirrus</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>none</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>bochs</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>ramfb</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </video>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <hostdev supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='mode'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>subsystem</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='startupPolicy'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>default</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>mandatory</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>requisite</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>optional</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='subsysType'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>pci</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>scsi</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='capsType'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='pciBackend'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </hostdev>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <rng supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio-transitional</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio-non-transitional</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>random</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>egd</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>builtin</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </rng>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <filesystem supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='driverType'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>path</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>handle</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtiofs</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </filesystem>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <tpm supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>tpm-tis</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>tpm-crb</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>emulator</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>external</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='backendVersion'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>2.0</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </tpm>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <redirdev supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='bus'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </redirdev>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <channel supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>pty</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>unix</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </channel>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <crypto supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='model'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>qemu</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>builtin</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </crypto>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <interface supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='backendType'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>default</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>passt</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </interface>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <panic supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>isa</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>hyperv</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </panic>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <console supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>null</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vc</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>pty</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>dev</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>file</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>pipe</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>stdio</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>udp</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>tcp</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>unix</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>qemu-vdagent</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>dbus</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </console>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   </devices>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <features>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <gic supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <vmcoreinfo supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <genid supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <backingStoreInput supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <backup supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <async-teardown supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <s390-pv supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <ps2 supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <tdx supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <sev supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <sgx supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <hyperv supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='features'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>relaxed</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vapic</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>spinlocks</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vpindex</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>runtime</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>synic</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>stimer</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>reset</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vendor_id</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>frequencies</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>reenlightenment</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>tlbflush</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>ipi</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>avic</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>emsr_bitmap</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>xmm_input</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <defaults>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <spinlocks>4095</spinlocks>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <stimer_direct>on</stimer_direct>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </defaults>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </hyperv>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <launchSecurity supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   </features>
Jan 21 14:03:31 compute-0 nova_compute[238343]: </domainCapabilities>
Jan 21 14:03:31 compute-0 nova_compute[238343]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
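The domainCapabilities document above is what nova-compute receives from libvirt (via _get_domain_capabilities, host.py:1037): each named CPU model carries usable='yes' or 'no', and for each unusable model a sibling <blockers model='...'> element lists the feature flags the host CPU cannot satisfy. A minimal standalone sketch of how such a document can be inspected offline, assuming the XML was saved to a local file domcaps.xml (the filename and the script itself are illustrative, not nova's or libvirt's own code):

    import xml.etree.ElementTree as ET

    # Parse a saved copy of the logged <domainCapabilities> document.
    # (With libvirt-python on the host, the same XML can be fetched live
    # via conn.getDomainCapabilities(...) on a qemu:///system connection.)
    root = ET.parse('domcaps.xml').getroot()

    # Named CPU models with usable= attributes live under <mode name='custom'>.
    mode = root.find("./cpu/mode[@name='custom']")
    assert mode is not None, 'no custom CPU mode in this document'

    # Map each <blockers model='X'> element to its list of blocking features.
    blockers = {b.get('model'): [f.get('name') for f in b.findall('feature')]
                for b in mode.findall('blockers')}

    for model in mode.findall('model'):
        if model.get('usable') == 'yes':
            print('usable :', model.text)
        else:
            print('blocked:', model.text,
                  '-> missing:', ', '.join(blockers.get(model.text, [])))

Run against the capabilities above, a script like this would report the Westmere variants as usable and every Skylake, SapphireRapids, SierraForest and Snowridge variant as blocked on the listed Intel-only features, which is consistent with the host-model section of the next dump resolving to an AMD EPYC-Rome host.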
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.005 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 21 14:03:31 compute-0 nova_compute[238343]: <domainCapabilities>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <domain>kvm</domain>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <arch>x86_64</arch>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <vcpu max='240'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <iothreads supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <os supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <enum name='firmware'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <loader supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>rom</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>pflash</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='readonly'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>yes</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>no</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='secure'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>no</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </loader>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   </os>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <cpu>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <mode name='host-passthrough' supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='hostPassthroughMigratable'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>on</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>off</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <mode name='maximum' supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='maximumMigratable'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>on</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>off</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <mode name='host-model' supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <vendor>AMD</vendor>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='x2apic'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='hypervisor'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='stibp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='ssbd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='overflow-recov'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='succor'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='ibrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='lbrv'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='tsc-scale'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='flushbyasid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='pause-filter'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='pfthreshold'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <feature policy='disable' name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <mode name='custom' supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Broadwell'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Broadwell-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Broadwell-noTSX'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Broadwell-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='ClearwaterForest'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ddpd-u'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sha512'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sm3'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sm4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='ClearwaterForest-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ddpd-u'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sha512'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sm3'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sm4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cooperlake'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cooperlake-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Cooperlake-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Denverton'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Denverton-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Denverton-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Denverton-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Dhyana-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Milan-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Rome-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Turin'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbpb'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-Turin-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amd-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='auto-ibrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='perfmon-v2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbpb'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='stibp-always-on'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='EPYC-v5'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-128'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-256'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-512'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='GraniteRapids-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-128'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-256'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx10-512'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='prefetchiti'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-noTSX'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Haswell-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v5'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v6'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Icelake-Server-v7'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='IvyBridge'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='IvyBridge-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='KnightsMill'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512er'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512pf'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='KnightsMill-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512er'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512pf'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Opteron_G4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Opteron_G4-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Opteron_G5'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tbm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Opteron_G5-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fma4'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tbm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xop'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SapphireRapids-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='amx-tile'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-bf16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-fp16'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bitalg'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrc'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fzrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='la57'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='taa-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SierraForest'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='SierraForest-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ifma'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cmpccxadd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fbsdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='fsrs'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ibrs-all'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='intel-psfd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='lam'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mcdt-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pbrsb-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='psdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='serialize'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vaes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Client-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='hle'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='rtm'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Skylake-Server-v5'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512bw'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512cd'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512dq'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512f'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='avx512vl'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='invpcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pcid'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='pku'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Snowridge'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='mpx'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v2'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v3'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='core-capability'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='split-lock-detect'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='Snowridge-v4'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='cldemote'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='erms'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='gfni'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdir64b'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='movdiri'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='xsaves'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='athlon'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='athlon-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='core2duo'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='core2duo-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='coreduo'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='coreduo-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='n270'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='n270-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='ss'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='phenom'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <blockers model='phenom-v1'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnow'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <feature name='3dnowext'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </blockers>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </mode>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   </cpu>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <memoryBacking supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <enum name='sourceType'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <value>file</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <value>anonymous</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <value>memfd</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   </memoryBacking>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <devices>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <disk supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='diskDevice'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>disk</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>cdrom</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>floppy</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>lun</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='bus'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>ide</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>fdc</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>scsi</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>sata</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio-transitional</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio-non-transitional</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </disk>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <graphics supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vnc</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>egl-headless</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>dbus</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </graphics>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <video supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='modelType'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vga</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>cirrus</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>none</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>bochs</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>ramfb</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </video>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <hostdev supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='mode'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>subsystem</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='startupPolicy'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>default</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>mandatory</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>requisite</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>optional</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='subsysType'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>pci</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>scsi</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='capsType'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='pciBackend'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </hostdev>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <rng supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio-transitional</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtio-non-transitional</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>random</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>egd</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>builtin</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </rng>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <filesystem supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='driverType'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>path</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>handle</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>virtiofs</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </filesystem>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <tpm supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>tpm-tis</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>tpm-crb</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>emulator</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>external</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='backendVersion'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>2.0</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </tpm>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <redirdev supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='bus'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>usb</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </redirdev>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <channel supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>pty</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>unix</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </channel>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <crypto supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='model'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>qemu</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='backendModel'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>builtin</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </crypto>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <interface supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='backendType'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>default</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>passt</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </interface>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <panic supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='model'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>isa</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>hyperv</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </panic>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <console supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='type'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>null</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vc</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>pty</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>dev</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>file</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>pipe</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>stdio</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>udp</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>tcp</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>unix</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>qemu-vdagent</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>dbus</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </console>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   </devices>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   <features>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <gic supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <vmcoreinfo supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <genid supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <backingStoreInput supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <backup supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <async-teardown supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <s390-pv supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <ps2 supported='yes'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <tdx supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <sev supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <sgx supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <hyperv supported='yes'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <enum name='features'>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>relaxed</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vapic</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>spinlocks</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vpindex</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>runtime</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>synic</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>stimer</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>reset</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>vendor_id</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>frequencies</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>reenlightenment</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>tlbflush</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>ipi</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>avic</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>emsr_bitmap</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <value>xmm_input</value>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </enum>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       <defaults>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <spinlocks>4095</spinlocks>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <stimer_direct>on</stimer_direct>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 14:03:31 compute-0 nova_compute[238343]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 14:03:31 compute-0 nova_compute[238343]:       </defaults>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     </hyperv>
Jan 21 14:03:31 compute-0 nova_compute[238343]:     <launchSecurity supported='no'/>
Jan 21 14:03:31 compute-0 nova_compute[238343]:   </features>
Jan 21 14:03:31 compute-0 nova_compute[238343]: </domainCapabilities>
Jan 21 14:03:31 compute-0 nova_compute[238343]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
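[editor's note] The XML dump ending above is libvirt's domainCapabilities document, which nova-compute fetches once at startup to learn which named CPU models the host can run and which device enums QEMU supports. A minimal sketch of retrieving and summarizing the same document with libvirt-python follows; the connection URI, architecture, machine type, and virt type are assumptions for illustration, not values taken from this host.

```python
# Minimal sketch: fetch and summarize a domainCapabilities document like the
# one dumped above, using libvirt-python. URI/arch/machine/virttype below are
# illustrative assumptions.
import xml.etree.ElementTree as ET

import libvirt  # provided by the libvirt-python package

conn = libvirt.open("qemu:///system")
caps_xml = conn.getDomainCapabilities(
    None,       # emulator binary: None = hypervisor default
    "x86_64",   # architecture
    "q35",      # machine type
    "kvm",      # virt type
)
root = ET.fromstring(caps_xml)

# Named CPU models the host can run directly, mirroring the
# <model usable='yes'> elements in the log above.
for model in root.findall(".//cpu/mode[@name='custom']/model"):
    if model.get("usable") == "yes":
        print("usable:", model.text)

# For unusable models, <blockers> names the host-missing features,
# exactly as in the SapphireRapids/SierraForest/Skylake entries above.
for blockers in root.findall(".//cpu/mode[@name='custom']/blockers"):
    missing = [f.get("name") for f in blockers.findall("feature")]
    print(blockers.get("model"), "blocked by:", ", ".join(missing))
```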
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.072 238347 DEBUG nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.072 238347 INFO nova.virt.libvirt.host [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Secure Boot support detected
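[editor's note] The "Secure Boot support detected" line is derived from the same domainCapabilities document; the relevant <os>/<loader> section is not part of the excerpt above. A hedged sketch of that kind of check, assuming the loader block advertises an enum named 'secure' (this is an illustration, not nova's actual implementation):

```python
# Hedged sketch of a secure-boot capability check. Assumes the
# <os><loader> section of domainCapabilities (not shown in this excerpt)
# carries an enum named 'secure'; caps_root is an ElementTree root as
# produced in the previous sketch.
def supports_secure_boot(caps_root) -> bool:
    loader = caps_root.find(".//os/loader")
    if loader is None or loader.get("supported") != "yes":
        return False
    secure = loader.find("enum[@name='secure']")
    return secure is not None and any(
        v.text == "yes" for v in secure.findall("value")
    )
```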
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.075 238347 INFO nova.virt.libvirt.driver [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
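[editor's note] The INFO line above records a precedence rule: when post-copy live migration is both permitted and available, it is preferred over auto-converge. A hedged paraphrase of that rule as code (the function name and signature are hypothetical, not nova's API):

```python
# Hedged paraphrase of the precedence the INFO line above describes;
# hypothetical helper, not nova's actual code.
def migration_progress_strategy(permit_post_copy: bool,
                                post_copy_available: bool,
                                permit_auto_converge: bool) -> str:
    # Post-copy switches execution to the destination and faults remaining
    # pages over the network, so it guarantees convergence; auto-converge
    # (vCPU throttling) is only a fallback when post-copy cannot be used.
    if permit_post_copy and post_copy_available:
        return "post-copy"
    if permit_auto_converge:
        return "auto-converge"
    return "plain pre-copy"
```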
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.087 238347 DEBUG nova.virt.libvirt.driver [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
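[editor's note] "Enabling emulated TPM support" follows from the <tpm supported='yes'> block earlier in this dump (models tpm-tis/tpm-crb, backend 'emulator', version 2.0). A minimal sketch reading those enums from the capabilities document, reusing an ElementTree root as in the first sketch:

```python
# Minimal sketch: read the <tpm> capability block shown earlier in this log
# to decide whether an emulated (swtpm-backed) vTPM can be offered.
def vtpm_support(caps_root):
    tpm = caps_root.find(".//devices/tpm")
    if tpm is None or tpm.get("supported") != "yes":
        return None

    def values(name):
        enum = tpm.find(f"enum[@name='{name}']")
        return [v.text for v in enum.findall("value")] if enum is not None else []

    return {
        "models": values("model"),             # e.g. tpm-tis, tpm-crb
        "backends": values("backendModel"),    # 'emulator' means swtpm-backed
        "versions": values("backendVersion"),  # e.g. 2.0
    }
```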
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.184 238347 INFO nova.virt.node [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Determined node identity 172aa181-ce4f-4953-808e-b8a26e60249f from /var/lib/nova/compute_id
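[editor's note] The stable node identity logged above is persisted in /var/lib/nova/compute_id (path taken from the log line). A minimal sketch of loading and validating such a file; the helper name is illustrative:

```python
# Minimal sketch: load a persisted node identity like the one logged above.
# The path comes from the log line; the helper itself is illustrative.
import uuid
from pathlib import Path


def read_compute_id(path: str = "/var/lib/nova/compute_id") -> uuid.UUID:
    text = Path(path).read_text().strip()
    return uuid.UUID(text)  # raises ValueError if the file is corrupt
```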
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.207 238347 WARNING nova.compute.manager [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Compute nodes ['172aa181-ce4f-4953-808e-b8a26e60249f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.249 238347 INFO nova.compute.manager [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.292 238347 WARNING nova.compute.manager [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.292 238347 DEBUG oslo_concurrency.lockutils [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.293 238347 DEBUG oslo_concurrency.lockutils [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.293 238347 DEBUG oslo_concurrency.lockutils [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.293 238347 DEBUG nova.compute.resource_tracker [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.293 238347 DEBUG oslo_concurrency.processutils [None req-3d4fd6fa-f781-4822-ba75-e73bde2bc6d9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
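The resource audit shells out to ceph df to size the RBD-backed disk pool. A hedged sketch of consuming that output; the exact ceph df JSON field names vary across Ceph releases, so "pools", "stats", and "max_avail" are assumptions here:

    import json
    import subprocess

    def ceph_pool_capacity(pool, conf="/etc/ceph/ceph.conf", client="openstack"):
        # Same command the resource tracker runs in the log line above.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf])
        data = json.loads(out)
        for p in data.get("pools", []):
            if p["name"] == pool:
                return p["stats"]["max_avail"]  # bytes still allocatable
        raise LookupError("pool %r not reported by ceph df" % pool)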
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.347 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.348 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 21 14:03:31 compute-0 nova_compute[238343]: 2026-01-21 14:03:31.348 238347 DEBUG oslo_concurrency.lockutils [None req-74ed9064-1d79-48ab-a1e2-1e7d2a1f2917 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
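The paired Acquiring/acquired/released lines here, and for "compute_resources" just above, are the standard oslo.concurrency logging pattern emitted by lockutils. A minimal sketch of the decorator that produces them, assuming only that the lock name matches the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Body runs with the named lock held; entry and exit produce the
        # "acquired ... waited" and "released ... held" DEBUG lines above.
        pass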
Jan 21 14:03:31 compute-0 ceph-mon[75031]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:31 compute-0 virtqemud[238983]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 21 14:03:31 compute-0 virtqemud[238983]: hostname: compute-0
Jan 21 14:03:31 compute-0 virtqemud[238983]: End of file while reading data: Input/output error
Jan 21 14:03:31 compute-0 systemd[1]: libpod-7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e.scope: Deactivated successfully.
Jan 21 14:03:31 compute-0 systemd[1]: libpod-7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e.scope: Consumed 3.241s CPU time.
Jan 21 14:03:31 compute-0 podman[239201]: 2026-01-21 14:03:31.804878737 +0000 UTC m=+0.800379357 container died 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 14:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e-userdata-shm.mount: Deactivated successfully.
Jan 21 14:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff-merged.mount: Deactivated successfully.
Jan 21 14:03:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:32 compute-0 podman[239201]: 2026-01-21 14:03:32.799330212 +0000 UTC m=+1.794830812 container cleanup 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 14:03:32 compute-0 podman[239201]: nova_compute
Jan 21 14:03:32 compute-0 podman[239232]: nova_compute
Jan 21 14:03:32 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 21 14:03:32 compute-0 systemd[1]: Stopped nova_compute container.
Jan 21 14:03:32 compute-0 systemd[1]: Starting nova_compute container...
Jan 21 14:03:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eec4ecc9de903144bcb0da93f0db313e6ba60791f9ba6e846e4064e9f9cbff/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:33 compute-0 podman[239246]: 2026-01-21 14:03:33.482461725 +0000 UTC m=+0.594721410 container init 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:03:33 compute-0 podman[239246]: 2026-01-21 14:03:33.489725124 +0000 UTC m=+0.601984779 container start 7d944b57858544dab7860736b12ae3a5a4228efe41bd7d07e43d89ba039edd6e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm)
Jan 21 14:03:33 compute-0 podman[239246]: nova_compute
Jan 21 14:03:33 compute-0 nova_compute[239261]: + sudo -E kolla_set_configs
Jan 21 14:03:33 compute-0 systemd[1]: Started nova_compute container.
Jan 21 14:03:33 compute-0 sudo[239191]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Validating config file
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying service configuration files
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Deleting /etc/ceph
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Creating directory /etc/ceph
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /etc/ceph
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Writing out command to execute
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 21 14:03:33 compute-0 nova_compute[239261]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 21 14:03:33 compute-0 nova_compute[239261]: ++ cat /run_command
Jan 21 14:03:33 compute-0 nova_compute[239261]: + CMD=nova-compute
Jan 21 14:03:33 compute-0 nova_compute[239261]: + ARGS=
Jan 21 14:03:33 compute-0 nova_compute[239261]: + sudo kolla_copy_cacerts
Jan 21 14:03:33 compute-0 nova_compute[239261]: + [[ ! -n '' ]]
Jan 21 14:03:33 compute-0 nova_compute[239261]: + . kolla_extend_start
Jan 21 14:03:33 compute-0 nova_compute[239261]: + echo 'Running command: '\''nova-compute'\'''
Jan 21 14:03:33 compute-0 nova_compute[239261]: Running command: 'nova-compute'
Jan 21 14:03:33 compute-0 nova_compute[239261]: + umask 0022
Jan 21 14:03:33 compute-0 nova_compute[239261]: + exec nova-compute
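Everything from "Loading config file" down to this exec is kolla's container entrypoint: kolla_set_configs copies files as directed by /var/lib/kolla/config_files/config.json (the COPY_ALWAYS strategy removes each destination first, hence the Deleting/Copying pairs), then kolla_start execs the command stored in /run_command. A sketch of the copy step, with a config fragment reconstructed from the messages above (the JSON structure is assumed; the paths are taken from the log):

    import os
    import shutil

    # Illustrative fragment of config.json for one of the files copied above.
    CONFIG = {
        "command": "nova-compute",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf",
             "owner": "nova", "perm": "0600"},
        ],
    }

    def copy_always(entry):
        # COPY_ALWAYS: delete any stale destination, then copy fresh.
        if os.path.exists(entry["dest"]):
            os.remove(entry["dest"])
        shutil.copy(entry["source"], entry["dest"])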
Jan 21 14:03:33 compute-0 ceph-mon[75031]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:03:33.893 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:03:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:03:33.894 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:03:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:03:33.894 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:03:34 compute-0 sudo[239422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fstnjjytvddahemqterrcgobotydjood ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769004213.7503939-1307-222683062444450/AnsiballZ_podman_container.py'
Jan 21 14:03:34 compute-0 sudo[239422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:03:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:34 compute-0 python3.9[239424]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 21 14:03:34 compute-0 systemd[1]: Started libpod-conmon-be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c.scope.
Jan 21 14:03:34 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73754ab590c92a8ebd550293de75b61a5236028489e78a0708a64563da83fab0/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73754ab590c92a8ebd550293de75b61a5236028489e78a0708a64563da83fab0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73754ab590c92a8ebd550293de75b61a5236028489e78a0708a64563da83fab0/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:34 compute-0 podman[239452]: 2026-01-21 14:03:34.489889759 +0000 UTC m=+0.133090472 container init be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 14:03:34 compute-0 podman[239452]: 2026-01-21 14:03:34.506464227 +0000 UTC m=+0.149664910 container start be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 14:03:34 compute-0 python3.9[239424]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Applying nova statedir ownership
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 21 14:03:34 compute-0 nova_compute_init[239473]: INFO:nova_statedir:Nova statedir ownership complete
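As the nova_compute_init messages show, this one-shot container re-owns the shared state directory for the in-container nova user (42436) before the service starts, skipping the path named in NOVA_STATEDIR_OWNERSHIP_SKIP. A simplified sketch of that walk; the real script also restores each path's SELinux context, as logged above:

    import os

    TARGET_UID = TARGET_GID = 42436          # values from the log above
    SKIP = {"/var/lib/nova/compute_id"}      # NOVA_STATEDIR_OWNERSHIP_SKIP

    def apply_statedir_ownership(root="/var/lib/nova"):
        for dirpath, dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.stat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    os.chown(path, TARGET_UID, TARGET_GID)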
Jan 21 14:03:34 compute-0 systemd[1]: libpod-be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c.scope: Deactivated successfully.
Jan 21 14:03:34 compute-0 podman[239488]: 2026-01-21 14:03:34.625492143 +0000 UTC m=+0.024794600 container died be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 14:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c-userdata-shm.mount: Deactivated successfully.
Jan 21 14:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-73754ab590c92a8ebd550293de75b61a5236028489e78a0708a64563da83fab0-merged.mount: Deactivated successfully.
Jan 21 14:03:34 compute-0 podman[239488]: 2026-01-21 14:03:34.652773374 +0000 UTC m=+0.052075831 container cleanup be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:03:34 compute-0 systemd[1]: libpod-conmon-be93323987fef98411e1c741f6ccc371d1528388c708a9c47fb0b729db0ca57c.scope: Deactivated successfully.
Jan 21 14:03:34 compute-0 sudo[239422]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:34 compute-0 sudo[239535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:03:34 compute-0 sudo[239535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:03:34 compute-0 sudo[239535]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:34 compute-0 sudo[239560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:03:34 compute-0 sudo[239560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:03:35 compute-0 sshd-session[214884]: Connection closed by 192.168.122.30 port 57222
Jan 21 14:03:35 compute-0 sshd-session[214868]: pam_unix(sshd:session): session closed for user zuul
Jan 21 14:03:35 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 21 14:03:35 compute-0 systemd[1]: session-50.scope: Consumed 2min 3.205s CPU time.
Jan 21 14:03:35 compute-0 systemd-logind[780]: Session 50 logged out. Waiting for processes to exit.
Jan 21 14:03:35 compute-0 systemd-logind[780]: Removed session 50.
Jan 21 14:03:35 compute-0 sudo[239560]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:03:35 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:03:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:03:35 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:03:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:03:35 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:03:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:03:35 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:03:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:03:35 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:03:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:03:35 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:03:35 compute-0 nova_compute[239261]: 2026-01-21 14:03:35.542 239265 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 14:03:35 compute-0 nova_compute[239261]: 2026-01-21 14:03:35.542 239265 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 14:03:35 compute-0 nova_compute[239261]: 2026-01-21 14:03:35.543 239265 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 21 14:03:35 compute-0 nova_compute[239261]: 2026-01-21 14:03:35.543 239265 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
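os_vif discovers its VIF plugins through setuptools entry points; the three names in the INFO line come from a stevedore manager over the os_vif namespace. A minimal reproduction (the namespace name is assumed from os_vif's packaging):

    from stevedore import extension

    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    print(sorted(mgr.names()))  # expect ['linux_bridge', 'noop', 'ovs'] here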
Jan 21 14:03:35 compute-0 sudo[239618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:03:35 compute-0 sudo[239618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:03:35 compute-0 sudo[239618]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:35 compute-0 sudo[239643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:03:35 compute-0 sudo[239643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:03:35 compute-0 nova_compute[239261]: 2026-01-21 14:03:35.685 239265 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:03:35 compute-0 nova_compute[239261]: 2026-01-21 14:03:35.714 239265 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:03:35 compute-0 nova_compute[239261]: 2026-01-21 14:03:35.714 239265 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
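This grep is a feature probe, not a failure: the string node.session.scan is searched in the iscsiadm binary to decide whether manual scan mode is available, and exit code 1 simply means "string absent" (unsurprising here, since /usr/sbin/iscsiadm was replaced by the run-on-host wrapper earlier in this log). A sketch of tolerating that exit code with the same oslo API the log names:

    from oslo_concurrency import processutils

    # check_exit_code=[0, 1] treats "not found" as a valid answer rather
    # than raising ProcessExecutionError, matching "Not Retrying" above.
    out, err = processutils.execute("grep", "-F", "node.session.scan",
                                    "/sbin/iscsiadm", check_exit_code=[0, 1])
    supports_manual_scan = "node.session.scan" in out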
Jan 21 14:03:35 compute-0 ceph-mon[75031]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:03:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:03:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:03:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:03:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:03:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:03:35 compute-0 podman[239682]: 2026-01-21 14:03:35.944834476 +0000 UTC m=+0.043873680 container create e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:03:35 compute-0 systemd[1]: Started libpod-conmon-e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3.scope.
Jan 21 14:03:36 compute-0 podman[239682]: 2026-01-21 14:03:35.922457166 +0000 UTC m=+0.021496170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:03:36 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:03:36 compute-0 podman[239682]: 2026-01-21 14:03:36.036507629 +0000 UTC m=+0.135546673 container init e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 14:03:36 compute-0 podman[239682]: 2026-01-21 14:03:36.044965577 +0000 UTC m=+0.144004611 container start e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 21 14:03:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:36 compute-0 podman[239682]: 2026-01-21 14:03:36.049121419 +0000 UTC m=+0.148160513 container attach e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 14:03:36 compute-0 flamboyant_moser[239699]: 167 167
Jan 21 14:03:36 compute-0 systemd[1]: libpod-e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3.scope: Deactivated successfully.
Jan 21 14:03:36 compute-0 podman[239682]: 2026-01-21 14:03:36.053252461 +0000 UTC m=+0.152291485 container died e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:03:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee7e10d3bbb4a31c0f93e35eb4ca6c8e37a30a0d952dafabb599399cb750fc12-merged.mount: Deactivated successfully.
Jan 21 14:03:36 compute-0 podman[239682]: 2026-01-21 14:03:36.13986336 +0000 UTC m=+0.238902364 container remove e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_moser, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 14:03:36 compute-0 systemd[1]: libpod-conmon-e6b03e9ac70f6968cbe8c97455e0063734f923287a5694cbd4373e1de38215d3.scope: Deactivated successfully.
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.156 239265 INFO nova.virt.driver [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.265 239265 INFO nova.compute.provider_config [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.285 239265 DEBUG oslo_concurrency.lockutils [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.285 239265 DEBUG oslo_concurrency.lockutils [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.286 239265 DEBUG oslo_concurrency.lockutils [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.286 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.286 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.287 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.287 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.287 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.288 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.288 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.288 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.289 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.289 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.289 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.289 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.289 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.290 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.290 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.290 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.290 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.291 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.291 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.291 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.291 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.291 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.292 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.292 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.292 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.292 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.292 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.293 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.293 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.293 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.293 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.294 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.294 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.294 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.294 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.294 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.295 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.295 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.295 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.295 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.295 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.296 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.296 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.296 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.297 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.297 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 podman[239724]: 2026-01-21 14:03:36.296697405 +0000 UTC m=+0.040426905 container create cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.297 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.297 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.297 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.298 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.298 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.298 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.299 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.299 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.299 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.300 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.300 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.300 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.300 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.301 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.301 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.301 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.301 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.302 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.302 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.302 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.302 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.303 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.303 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.303 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.304 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.304 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.304 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.305 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.305 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.305 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.305 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.306 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.306 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.306 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.306 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.307 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.307 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.307 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.307 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.308 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.308 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.308 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.309 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.309 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.309 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.309 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.310 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.310 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.310 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.310 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.311 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.311 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.311 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.311 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.312 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.312 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.312 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.312 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.312 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.313 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.314 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.315 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.316 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.317 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.318 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.319 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.320 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.321 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.322 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.323 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.324 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.325 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.326 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.326 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.326 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.326 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.326 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.327 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.327 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.327 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.327 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.327 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.328 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.328 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.328 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.328 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.328 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.329 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.329 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.329 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.329 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.329 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.330 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.331 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.331 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.331 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.331 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.331 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.332 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.332 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.332 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.332 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.332 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.333 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.333 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.333 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.333 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.333 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.334 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.335 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.335 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.335 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.335 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.335 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.336 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.336 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.336 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.336 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.336 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.337 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.337 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.337 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.337 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.337 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.338 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.339 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.339 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.339 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.339 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.339 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.340 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.340 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.340 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.340 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.340 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.341 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.341 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.341 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.341 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.341 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.342 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.342 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.342 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 systemd[1]: Started libpod-conmon-cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3.scope.
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.342 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.342 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.343 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.343 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.343 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.343 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.343 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.344 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.344 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.344 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.344 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.344 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.345 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.346 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.346 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.346 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.346 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.346 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.347 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.347 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.347 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.347 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.347 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.348 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.348 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.348 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.348 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.348 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.349 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.350 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.350 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.350 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.350 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.350 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.351 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.351 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.351 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.351 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.351 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.352 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.352 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.352 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.352 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.353 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.354 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.355 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.356 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.357 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.357 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.357 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.357 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.357 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.358 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.359 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.360 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.361 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.362 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.363 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.364 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.365 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.366 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.367 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.368 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.369 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.370 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.371 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.372 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.373 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.374 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.374 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.374 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.374 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.374 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 podman[239724]: 2026-01-21 14:03:36.278267792 +0000 UTC m=+0.021997312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.375 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.376 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.376 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.376 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.376 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.376 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.377 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.377 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.377 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ad59a83bbaa6f1cfdc2ad87d29aea5ab2c329f2775235b27450833a971ab42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.377 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.377 239265 WARNING oslo_config.cfg [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 21 14:03:36 compute-0 nova_compute[239261]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 21 14:03:36 compute-0 nova_compute[239261]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 21 14:03:36 compute-0 nova_compute[239261]: and ``live_migration_inbound_addr`` respectively.
Jan 21 14:03:36 compute-0 nova_compute[239261]: ).  Its value may be silently ignored in the future.
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.378 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
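[The WARNING above is oslo.config's standard deprecated-option notice: this deployment still sets libvirt.live_migration_uri (qemu+tls://%s/system) even though the option is slated for removal in favor of live_migration_scheme and live_migration_inbound_addr. As a minimal sketch of where such warnings come from, assuming only the oslo.config library — the declaration below is illustrative, not nova's actual option definition:

    from oslo_config import cfg

    # An option declared deprecated_for_removal=True causes oslo.config to
    # emit a "Deprecated: Option ... is deprecated for removal" warning,
    # like the one logged above, when the deployment actually sets it.
    live_migration_uri = cfg.StrOpt(
        'live_migration_uri',
        deprecated_for_removal=True,
        deprecated_reason='Superseded by live_migration_scheme and '
                          'live_migration_inbound_addr.')

    conf = cfg.ConfigOpts()
    conf.register_opts([live_migration_uri], group='libvirt')
]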
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.378 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.378 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.378 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.378 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.379 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.379 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.379 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.379 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.379 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.380 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.380 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.380 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.380 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.380 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.381 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.381 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ad59a83bbaa6f1cfdc2ad87d29aea5ab2c329f2775235b27450833a971ab42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ad59a83bbaa6f1cfdc2ad87d29aea5ab2c329f2775235b27450833a971ab42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.381 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.381 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rbd_secret_uuid        = 2f0e9cad-f0a3-5869-9cc3-8d84d071866a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.381 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.382 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.382 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ad59a83bbaa6f1cfdc2ad87d29aea5ab2c329f2775235b27450833a971ab42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.382 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.382 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.382 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.383 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.383 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ad59a83bbaa6f1cfdc2ad87d29aea5ab2c329f2775235b27450833a971ab42/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.383 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.383 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.383 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.384 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.384 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.384 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.384 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.384 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.385 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.385 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.385 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.385 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.385 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.386 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.386 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.386 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.386 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.386 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.387 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.387 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.387 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.387 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.387 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
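[That closes the [libvirt] group. The dump shows a Ceph-backed image store: libvirt.images_type = rbd, libvirt.images_rbd_pool = vms, libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf, libvirt.rbd_user = openstack. A minimal sketch of reaching that same pool with those same credentials, assuming python-rados/python-rbd and a reachable cluster — this is not nova's code path, just the endpoint the options describe:

    import rados
    import rbd

    # Connect as the user from libvirt.rbd_user using the conf file from
    # libvirt.images_rbd_ceph_conf (both taken from the dump above).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # pool from libvirt.images_rbd_pool
        try:
            # Instance root disks live in this pool as RBD images.
            print(rbd.RBD().list(ioctx))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
]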
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.388 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.389 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.390 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.391 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.392 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.393 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.394 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 podman[239724]: 2026-01-21 14:03:36.395700809 +0000 UTC m=+0.139430579 container init cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.395 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.396 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.397 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
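[The [placement] group above spells out a complete keystoneauth password auth: auth_type = password, auth_url = https://keystone-internal.openstack.svc:5000, username = nova, project_name = service, both domains Default, region regionOne, internal interface only. A minimal sketch of the equivalent session, assuming keystoneauth1; the password is masked ('****') in the dump, so a placeholder stands in for it here:

    from keystoneauth1 import identity, session

    # Mirrors the [placement] options logged above; password masked in log.
    auth = identity.Password(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='nova',
        password='<masked in log>',
        project_name='service',
        user_domain_name='Default',
        project_domain_name='Default')
    sess = session.Session(auth=auth)

    # Endpoint discovery would then use the logged service_type/interfaces:
    # sess.get_endpoint(service_type='placement', interface='internal',
    #                   region_name='regionOne')
]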
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.398 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.399 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.400 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.401 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.402 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.402 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 podman[239724]: 2026-01-21 14:03:36.40226674 +0000 UTC m=+0.145996240 container start cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.402 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.402 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.402 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.403 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.403 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.403 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.403 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.403 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.404 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 podman[239724]: 2026-01-21 14:03:36.405664314 +0000 UTC m=+0.149393854 container attach cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.405 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.406 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.407 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.408 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.409 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.410 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.411 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.412 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.413 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.414 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.415 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.416 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.417 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.418 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.419 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.420 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.421 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.422 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.423 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.424 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.425 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.426 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.427 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.428 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.429 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.430 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.431 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.432 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.433 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.434 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.435 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.436 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.437 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.438 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.439 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.440 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.441 239265 DEBUG oslo_service.service [None req-bb1d5646-1bfa-4786-8b7b-ecd6cff96861 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.442 239265 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.454 239265 INFO nova.virt.node [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Determined node identity 172aa181-ce4f-4953-808e-b8a26e60249f from /var/lib/nova/compute_id
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.455 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.455 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.455 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.456 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.465 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f41aa53e2b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.467 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f41aa53e2b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.468 239265 INFO nova.virt.libvirt.driver [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Connection event '1' reason 'None'
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.473 239265 INFO nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Libvirt host capabilities <capabilities>
Jan 21 14:03:36 compute-0 nova_compute[239261]: 
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <host>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <uuid>7823760d-0166-4122-8fb2-3165351e57e7</uuid>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <cpu>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <arch>x86_64</arch>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model>EPYC-Rome-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <vendor>AMD</vendor>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <microcode version='16777317'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <signature family='23' model='49' stepping='0'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='x2apic'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='tsc-deadline'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='osxsave'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='hypervisor'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='tsc_adjust'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='spec-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='stibp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='arch-capabilities'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='cmp_legacy'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='topoext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='virt-ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='lbrv'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='tsc-scale'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='vmcb-clean'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='pause-filter'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='pfthreshold'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='svme-addr-chk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='rdctl-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='skip-l1dfl-vmentry'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='mds-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature name='pschange-mc-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <pages unit='KiB' size='4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <pages unit='KiB' size='2048'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <pages unit='KiB' size='1048576'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </cpu>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <power_management>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <suspend_mem/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </power_management>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <iommu support='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <migration_features>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <live/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <uri_transports>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <uri_transport>tcp</uri_transport>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <uri_transport>rdma</uri_transport>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </uri_transports>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </migration_features>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <topology>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <cells num='1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <cell id='0'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:           <memory unit='KiB'>7864316</memory>
Jan 21 14:03:36 compute-0 nova_compute[239261]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 21 14:03:36 compute-0 nova_compute[239261]:           <pages unit='KiB' size='2048'>0</pages>
Jan 21 14:03:36 compute-0 nova_compute[239261]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 21 14:03:36 compute-0 nova_compute[239261]:           <distances>
Jan 21 14:03:36 compute-0 nova_compute[239261]:             <sibling id='0' value='10'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:           </distances>
Jan 21 14:03:36 compute-0 nova_compute[239261]:           <cpus num='8'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:           </cpus>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         </cell>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </cells>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </topology>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <cache>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </cache>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <secmodel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model>selinux</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <doi>0</doi>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </secmodel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <secmodel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model>dac</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <doi>0</doi>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </secmodel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </host>
Jan 21 14:03:36 compute-0 nova_compute[239261]: 
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <guest>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <os_type>hvm</os_type>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <arch name='i686'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <wordsize>32</wordsize>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <domain type='qemu'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <domain type='kvm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </arch>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <features>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <pae/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <nonpae/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <acpi default='on' toggle='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <apic default='on' toggle='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <cpuselection/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <deviceboot/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <disksnapshot default='on' toggle='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <externalSnapshot/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </features>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </guest>
Jan 21 14:03:36 compute-0 nova_compute[239261]: 
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <guest>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <os_type>hvm</os_type>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <arch name='x86_64'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <wordsize>64</wordsize>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <domain type='qemu'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <domain type='kvm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </arch>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <features>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <acpi default='on' toggle='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <apic default='on' toggle='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <cpuselection/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <deviceboot/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <disksnapshot default='on' toggle='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <externalSnapshot/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </features>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </guest>
Jan 21 14:03:36 compute-0 nova_compute[239261]: 
Jan 21 14:03:36 compute-0 nova_compute[239261]: </capabilities>
Jan 21 14:03:36 compute-0 nova_compute[239261]: 
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.480 239265 DEBUG nova.virt.libvirt.volume.mount [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.486 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.489 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 21 14:03:36 compute-0 nova_compute[239261]: <domainCapabilities>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <domain>kvm</domain>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <arch>i686</arch>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <vcpu max='4096'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <iothreads supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <os supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <enum name='firmware'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <loader supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>rom</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pflash</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='readonly'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>yes</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>no</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='secure'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>no</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </loader>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </os>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <cpu>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='host-passthrough' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='hostPassthroughMigratable'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>on</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>off</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='maximum' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='maximumMigratable'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>on</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>off</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='host-model' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <vendor>AMD</vendor>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='x2apic'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='hypervisor'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='stibp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='overflow-recov'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='succor'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='lbrv'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc-scale'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='flushbyasid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='pause-filter'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='pfthreshold'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='disable' name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='custom' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='ClearwaterForest'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ddpd-u'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sha512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm3'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='ClearwaterForest-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ddpd-u'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sha512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm3'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Dhyana-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Turin'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbpb'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Turin-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbpb'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-128'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-256'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-128'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-256'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v6'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v7'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='KnightsMill'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512er'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512pf'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='KnightsMill-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512er'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512pf'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G4-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tbm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G5-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tbm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='athlon'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='athlon-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='core2duo'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='core2duo-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='coreduo'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='coreduo-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='n270'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='n270-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='phenom'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='phenom-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </cpu>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <memoryBacking supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <enum name='sourceType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>file</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>anonymous</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>memfd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </memoryBacking>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <devices>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <disk supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='diskDevice'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>disk</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>cdrom</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>floppy</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>lun</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='bus'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>fdc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>scsi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>sata</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-non-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </disk>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <graphics supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vnc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>egl-headless</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dbus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </graphics>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <video supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='modelType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vga</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>cirrus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>none</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>bochs</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>ramfb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </video>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <hostdev supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='mode'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>subsystem</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='startupPolicy'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>default</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>mandatory</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>requisite</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>optional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='subsysType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pci</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>scsi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='capsType'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='pciBackend'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </hostdev>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <rng supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-non-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>random</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>egd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>builtin</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </rng>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <filesystem supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='driverType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>path</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>handle</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtiofs</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </filesystem>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <tpm supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tpm-tis</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tpm-crb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>emulator</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>external</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendVersion'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>2.0</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </tpm>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <redirdev supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='bus'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </redirdev>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <channel supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pty</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>unix</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </channel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <crypto supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>qemu</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>builtin</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </crypto>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <interface supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>default</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>passt</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </interface>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <panic supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>isa</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>hyperv</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </panic>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <console supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>null</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pty</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dev</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>file</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pipe</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>stdio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>udp</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tcp</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>unix</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>qemu-vdagent</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dbus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </console>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </devices>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <features>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <gic supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <vmcoreinfo supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <genid supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <backingStoreInput supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <backup supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <async-teardown supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <s390-pv supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <ps2 supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <tdx supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <sev supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <sgx supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <hyperv supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='features'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>relaxed</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vapic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>spinlocks</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vpindex</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>runtime</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>synic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>stimer</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>reset</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vendor_id</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>frequencies</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>reenlightenment</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tlbflush</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>ipi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>avic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>emsr_bitmap</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>xmm_input</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <defaults>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <spinlocks>4095</spinlocks>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <stimer_direct>on</stimer_direct>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </defaults>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </hyperv>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <launchSecurity supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </features>
Jan 21 14:03:36 compute-0 nova_compute[239261]: </domainCapabilities>
Jan 21 14:03:36 compute-0 nova_compute[239261]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.496 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 21 14:03:36 compute-0 nova_compute[239261]: <domainCapabilities>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <domain>kvm</domain>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <arch>i686</arch>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <vcpu max='240'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <iothreads supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <os supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <enum name='firmware'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <loader supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>rom</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pflash</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='readonly'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>yes</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>no</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='secure'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>no</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </loader>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </os>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <cpu>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='host-passthrough' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='hostPassthroughMigratable'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>on</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>off</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='maximum' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='maximumMigratable'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>on</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>off</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='host-model' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <vendor>AMD</vendor>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='x2apic'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='hypervisor'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='stibp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='overflow-recov'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='succor'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='lbrv'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc-scale'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='flushbyasid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='pause-filter'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='pfthreshold'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='disable' name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='custom' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='ClearwaterForest'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ddpd-u'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sha512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm3'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='ClearwaterForest-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ddpd-u'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sha512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm3'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Dhyana-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Turin'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbpb'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Turin-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbpb'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-128'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-256'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-128'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-256'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v6'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v7'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='KnightsMill'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512er'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512pf'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='KnightsMill-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512er'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512pf'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G4-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tbm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G5-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tbm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='athlon'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='athlon-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='core2duo'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='core2duo-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='coreduo'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='coreduo-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='n270'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='n270-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='phenom'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='phenom-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </cpu>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <memoryBacking supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <enum name='sourceType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>file</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>anonymous</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>memfd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </memoryBacking>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <devices>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <disk supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='diskDevice'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>disk</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>cdrom</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>floppy</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>lun</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='bus'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>ide</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>fdc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>scsi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>sata</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-non-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </disk>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <graphics supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vnc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>egl-headless</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dbus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </graphics>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <video supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='modelType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vga</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>cirrus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>none</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>bochs</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>ramfb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </video>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <hostdev supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='mode'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>subsystem</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='startupPolicy'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>default</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>mandatory</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>requisite</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>optional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='subsysType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pci</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>scsi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='capsType'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='pciBackend'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </hostdev>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <rng supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-non-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>random</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>egd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>builtin</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </rng>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <filesystem supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='driverType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>path</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>handle</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtiofs</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </filesystem>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <tpm supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tpm-tis</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tpm-crb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>emulator</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>external</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendVersion'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>2.0</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </tpm>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <redirdev supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='bus'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </redirdev>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <channel supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pty</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>unix</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </channel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <crypto supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>qemu</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>builtin</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </crypto>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <interface supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>default</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>passt</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </interface>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <panic supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>isa</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>hyperv</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </panic>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <console supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>null</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pty</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dev</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>file</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pipe</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>stdio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>udp</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tcp</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>unix</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>qemu-vdagent</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dbus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </console>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </devices>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <features>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <gic supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <vmcoreinfo supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <genid supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <backingStoreInput supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <backup supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <async-teardown supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <s390-pv supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <ps2 supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <tdx supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <sev supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <sgx supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <hyperv supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='features'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>relaxed</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vapic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>spinlocks</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vpindex</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>runtime</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>synic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>stimer</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>reset</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vendor_id</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>frequencies</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>reenlightenment</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tlbflush</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>ipi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>avic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>emsr_bitmap</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>xmm_input</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <defaults>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <spinlocks>4095</spinlocks>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <stimer_direct>on</stimer_direct>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </defaults>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </hyperv>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <launchSecurity supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </features>
Jan 21 14:03:36 compute-0 nova_compute[239261]: </domainCapabilities>
Jan 21 14:03:36 compute-0 nova_compute[239261]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.553 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.557 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 21 14:03:36 compute-0 nova_compute[239261]: <domainCapabilities>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <domain>kvm</domain>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <arch>x86_64</arch>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <vcpu max='4096'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <iothreads supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <os supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <enum name='firmware'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>efi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <loader supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>rom</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pflash</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='readonly'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>yes</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>no</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='secure'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>yes</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>no</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </loader>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </os>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <cpu>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='host-passthrough' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='hostPassthroughMigratable'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>on</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>off</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='maximum' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='maximumMigratable'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>on</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>off</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='host-model' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <vendor>AMD</vendor>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='x2apic'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='hypervisor'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='stibp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='overflow-recov'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='succor'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='lbrv'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc-scale'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='flushbyasid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='pause-filter'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='pfthreshold'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='disable' name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='custom' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='ClearwaterForest'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ddpd-u'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sha512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm3'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='ClearwaterForest-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ddpd-u'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sha512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm3'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Dhyana-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Turin'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbpb'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Turin-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbpb'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-128'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-256'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-128'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-256'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v6'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v7'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='KnightsMill'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512er'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512pf'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='KnightsMill-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512er'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512pf'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G4-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tbm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G5-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tbm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='athlon'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='athlon-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='core2duo'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='core2duo-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='coreduo'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='coreduo-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='n270'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='n270-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='phenom'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='phenom-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </cpu>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <memoryBacking supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <enum name='sourceType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>file</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>anonymous</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>memfd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </memoryBacking>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <devices>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <disk supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='diskDevice'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>disk</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>cdrom</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>floppy</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>lun</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='bus'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>fdc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>scsi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>sata</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-non-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </disk>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <graphics supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vnc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>egl-headless</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dbus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </graphics>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <video supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='modelType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vga</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>cirrus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>none</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>bochs</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>ramfb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </video>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <hostdev supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='mode'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>subsystem</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='startupPolicy'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>default</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>mandatory</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>requisite</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>optional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='subsysType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pci</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>scsi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='capsType'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='pciBackend'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </hostdev>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <rng supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-non-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>random</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>egd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>builtin</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </rng>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <filesystem supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='driverType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>path</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>handle</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtiofs</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </filesystem>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <tpm supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tpm-tis</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tpm-crb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>emulator</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>external</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendVersion'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>2.0</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </tpm>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <redirdev supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='bus'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </redirdev>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <channel supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pty</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>unix</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </channel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <crypto supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>qemu</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>builtin</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </crypto>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <interface supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>default</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>passt</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </interface>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <panic supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>isa</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>hyperv</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </panic>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <console supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>null</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pty</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dev</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>file</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pipe</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>stdio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>udp</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tcp</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>unix</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>qemu-vdagent</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dbus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </console>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </devices>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <features>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <gic supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <vmcoreinfo supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <genid supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <backingStoreInput supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <backup supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <async-teardown supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <s390-pv supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <ps2 supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <tdx supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <sev supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <sgx supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <hyperv supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='features'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>relaxed</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vapic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>spinlocks</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vpindex</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>runtime</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>synic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>stimer</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>reset</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vendor_id</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>frequencies</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>reenlightenment</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tlbflush</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>ipi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>avic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>emsr_bitmap</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>xmm_input</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <defaults>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <spinlocks>4095</spinlocks>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <stimer_direct>on</stimer_direct>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </defaults>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </hyperv>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <launchSecurity supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </features>
Jan 21 14:03:36 compute-0 nova_compute[239261]: </domainCapabilities>
Jan 21 14:03:36 compute-0 nova_compute[239261]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.656 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 21 14:03:36 compute-0 nova_compute[239261]: <domainCapabilities>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <path>/usr/libexec/qemu-kvm</path>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <domain>kvm</domain>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <arch>x86_64</arch>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <vcpu max='240'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <iothreads supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <os supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <enum name='firmware'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <loader supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>rom</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pflash</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='readonly'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>yes</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>no</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='secure'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>no</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </loader>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </os>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <cpu>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='host-passthrough' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='hostPassthroughMigratable'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>on</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>off</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='maximum' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='maximumMigratable'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>on</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>off</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='host-model' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <vendor>AMD</vendor>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='x2apic'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc-deadline'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='hypervisor'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc_adjust'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='spec-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='stibp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='cmp_legacy'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='overflow-recov'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='succor'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='amd-ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='virt-ssbd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='lbrv'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='tsc-scale'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='vmcb-clean'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='flushbyasid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='pause-filter'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='pfthreshold'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='svme-addr-chk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <feature policy='disable' name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <mode name='custom' supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Broadwell-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cascadelake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='ClearwaterForest'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ddpd-u'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sha512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm3'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='ClearwaterForest-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ddpd-u'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sha512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm3'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sm4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Cooperlake-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Denverton-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Dhyana-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Genoa-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Milan-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Rome-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Turin'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbpb'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-Turin-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amd-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='auto-ibrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vp2intersect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fs-gs-base-ns'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibpb-brtype'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='no-nested-data-bp'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='null-sel-clr-base'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='perfmon-v2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbpb'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='srso-user-kernel-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='stibp-always-on'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='EPYC-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-128'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-256'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='GraniteRapids-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-128'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-256'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx10-512'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='prefetchiti'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Haswell-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-noTSX'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v6'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Icelake-Server-v7'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='IvyBridge-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='KnightsMill'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512er'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512pf'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='KnightsMill-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4fmaps'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-4vnniw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512er'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512pf'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G4-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tbm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Opteron_G5-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fma4'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tbm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xop'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SapphireRapids-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='amx-tile'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-bf16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-fp16'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512-vpopcntdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bitalg'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vbmi2'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrc'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fzrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='la57'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='taa-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='tsx-ldtrk'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='SierraForest-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ifma'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-ne-convert'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx-vnni-int8'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bhi-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='bus-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cmpccxadd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fbsdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='fsrs'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ibrs-all'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='intel-psfd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ipred-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='lam'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mcdt-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pbrsb-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='psdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rrsba-ctrl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='sbdr-ssdp-no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='serialize'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vaes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='vpclmulqdq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Client-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='hle'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='rtm'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Skylake-Server-v5'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512bw'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512cd'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512dq'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512f'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='avx512vl'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='invpcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pcid'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='pku'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='mpx'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v2'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v3'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='core-capability'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='split-lock-detect'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='Snowridge-v4'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='cldemote'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='erms'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='gfni'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdir64b'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='movdiri'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='xsaves'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='athlon'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='athlon-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='core2duo'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='core2duo-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='coreduo'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='coreduo-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='n270'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='n270-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='ss'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='phenom'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <blockers model='phenom-v1'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnow'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <feature name='3dnowext'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </blockers>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </mode>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </cpu>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <memoryBacking supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <enum name='sourceType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>file</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>anonymous</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <value>memfd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </memoryBacking>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <devices>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <disk supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='diskDevice'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>disk</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>cdrom</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>floppy</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>lun</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='bus'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>ide</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>fdc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>scsi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>sata</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-non-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </disk>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <graphics supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vnc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>egl-headless</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dbus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </graphics>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <video supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='modelType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vga</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>cirrus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>none</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>bochs</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>ramfb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </video>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <hostdev supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='mode'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>subsystem</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='startupPolicy'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>default</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>mandatory</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>requisite</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>optional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='subsysType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pci</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>scsi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='capsType'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='pciBackend'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </hostdev>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <rng supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtio-non-transitional</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>random</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>egd</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>builtin</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </rng>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <filesystem supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='driverType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>path</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>handle</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>virtiofs</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </filesystem>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <tpm supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tpm-tis</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tpm-crb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>emulator</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>external</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendVersion'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>2.0</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </tpm>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <redirdev supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='bus'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>usb</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </redirdev>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <channel supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pty</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>unix</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </channel>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <crypto supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>qemu</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendModel'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>builtin</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </crypto>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <interface supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='backendType'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>default</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>passt</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </interface>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <panic supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='model'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>isa</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>hyperv</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </panic>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <console supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='type'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>null</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vc</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pty</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dev</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>file</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>pipe</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>stdio</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>udp</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tcp</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>unix</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>qemu-vdagent</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>dbus</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </console>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </devices>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   <features>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <gic supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <vmcoreinfo supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <genid supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <backingStoreInput supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <backup supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <async-teardown supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <s390-pv supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <ps2 supported='yes'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <tdx supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <sev supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <sgx supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <hyperv supported='yes'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <enum name='features'>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>relaxed</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vapic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>spinlocks</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vpindex</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>runtime</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>synic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>stimer</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>reset</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>vendor_id</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>frequencies</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>reenlightenment</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>tlbflush</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>ipi</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>avic</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>emsr_bitmap</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <value>xmm_input</value>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </enum>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       <defaults>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <spinlocks>4095</spinlocks>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <stimer_direct>on</stimer_direct>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <tlbflush_direct>on</tlbflush_direct>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <tlbflush_extended>on</tlbflush_extended>
Jan 21 14:03:36 compute-0 nova_compute[239261]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 21 14:03:36 compute-0 nova_compute[239261]:       </defaults>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     </hyperv>
Jan 21 14:03:36 compute-0 nova_compute[239261]:     <launchSecurity supported='no'/>
Jan 21 14:03:36 compute-0 nova_compute[239261]:   </features>
Jan 21 14:03:36 compute-0 nova_compute[239261]: </domainCapabilities>
Jan 21 14:03:36 compute-0 nova_compute[239261]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.754 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.755 239265 INFO nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Secure Boot support detected
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.758 239265 INFO nova.virt.libvirt.driver [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.770 239265 DEBUG nova.virt.libvirt.driver [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.793 239265 INFO nova.virt.node [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Determined node identity 172aa181-ce4f-4953-808e-b8a26e60249f from /var/lib/nova/compute_id
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.813 239265 WARNING nova.compute.manager [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Compute nodes ['172aa181-ce4f-4953-808e-b8a26e60249f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.851 239265 INFO nova.compute.manager [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.873 239265 WARNING nova.compute.manager [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.874 239265 DEBUG oslo_concurrency.lockutils [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.874 239265 DEBUG oslo_concurrency.lockutils [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.874 239265 DEBUG oslo_concurrency.lockutils [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.874 239265 DEBUG nova.compute.resource_tracker [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:03:36 compute-0 nova_compute[239261]: 2026-01-21 14:03:36.875 239265 DEBUG oslo_concurrency.processutils [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:03:36 compute-0 naughty_napier[239740]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:03:36 compute-0 naughty_napier[239740]: --> All data devices are unavailable
Jan 21 14:03:36 compute-0 systemd[1]: libpod-cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3.scope: Deactivated successfully.
Jan 21 14:03:36 compute-0 podman[239724]: 2026-01-21 14:03:36.951921902 +0000 UTC m=+0.695651412 container died cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 14:03:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2ad59a83bbaa6f1cfdc2ad87d29aea5ab2c329f2775235b27450833a971ab42-merged.mount: Deactivated successfully.
Jan 21 14:03:36 compute-0 podman[239724]: 2026-01-21 14:03:36.998041636 +0000 UTC m=+0.741771136 container remove cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 21 14:03:37 compute-0 systemd[1]: libpod-conmon-cc4062a23a20422f8a08d74c5c765b1b592332c57332375a12c0f5c12533f1f3.scope: Deactivated successfully.
Jan 21 14:03:37 compute-0 sudo[239643]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:37 compute-0 sudo[239813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:03:37 compute-0 sudo[239813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:03:37 compute-0 sudo[239813]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:37 compute-0 sudo[239838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:03:37 compute-0 sudo[239838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:03:37 compute-0 rsyslogd[1002]: imjournal from <np0005590528:nova_compute>: begin to drop messages due to rate-limiting
Jan 21 14:03:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:03:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772679129' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:03:37 compute-0 nova_compute[239261]: 2026-01-21 14:03:37.382 239265 DEBUG oslo_concurrency.processutils [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:03:37 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 21 14:03:37 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 21 14:03:37 compute-0 podman[239878]: 2026-01-21 14:03:37.477634245 +0000 UTC m=+0.038376004 container create 818318c2e011d18a07cc7b4fc15f92baf3604a9b65e2fdc1b9afede0a668e158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 14:03:37 compute-0 systemd[1]: Started libpod-conmon-818318c2e011d18a07cc7b4fc15f92baf3604a9b65e2fdc1b9afede0a668e158.scope.
Jan 21 14:03:37 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:03:37 compute-0 podman[239878]: 2026-01-21 14:03:37.459734646 +0000 UTC m=+0.020476425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:03:37 compute-0 podman[239878]: 2026-01-21 14:03:37.558753419 +0000 UTC m=+0.119495228 container init 818318c2e011d18a07cc7b4fc15f92baf3604a9b65e2fdc1b9afede0a668e158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:03:37 compute-0 podman[239878]: 2026-01-21 14:03:37.567628448 +0000 UTC m=+0.128370247 container start 818318c2e011d18a07cc7b4fc15f92baf3604a9b65e2fdc1b9afede0a668e158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 21 14:03:37 compute-0 youthful_satoshi[239915]: 167 167
Jan 21 14:03:37 compute-0 systemd[1]: libpod-818318c2e011d18a07cc7b4fc15f92baf3604a9b65e2fdc1b9afede0a668e158.scope: Deactivated successfully.
Jan 21 14:03:37 compute-0 podman[239878]: 2026-01-21 14:03:37.572520498 +0000 UTC m=+0.133262267 container attach 818318c2e011d18a07cc7b4fc15f92baf3604a9b65e2fdc1b9afede0a668e158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_satoshi, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 14:03:37 compute-0 podman[239878]: 2026-01-21 14:03:37.572850646 +0000 UTC m=+0.133592405 container died 818318c2e011d18a07cc7b4fc15f92baf3604a9b65e2fdc1b9afede0a668e158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 21 14:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f2c3415e8c4dda2ebb760da35b1c4d964d64f25951097db21b8628ae0216919-merged.mount: Deactivated successfully.
Jan 21 14:03:37 compute-0 podman[239878]: 2026-01-21 14:03:37.608955443 +0000 UTC m=+0.169697202 container remove 818318c2e011d18a07cc7b4fc15f92baf3604a9b65e2fdc1b9afede0a668e158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_satoshi, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 14:03:37 compute-0 systemd[1]: libpod-conmon-818318c2e011d18a07cc7b4fc15f92baf3604a9b65e2fdc1b9afede0a668e158.scope: Deactivated successfully.
Jan 21 14:03:37 compute-0 nova_compute[239261]: 2026-01-21 14:03:37.697 239265 WARNING nova.virt.libvirt.driver [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:03:37 compute-0 nova_compute[239261]: 2026-01-21 14:03:37.698 239265 DEBUG nova.compute.resource_tracker [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5139MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:03:37 compute-0 nova_compute[239261]: 2026-01-21 14:03:37.699 239265 DEBUG oslo_concurrency.lockutils [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:03:37 compute-0 nova_compute[239261]: 2026-01-21 14:03:37.699 239265 DEBUG oslo_concurrency.lockutils [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:03:37 compute-0 nova_compute[239261]: 2026-01-21 14:03:37.718 239265 WARNING nova.compute.resource_tracker [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] No compute node record for compute-0.ctlplane.example.com:172aa181-ce4f-4953-808e-b8a26e60249f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 172aa181-ce4f-4953-808e-b8a26e60249f could not be found.
Jan 21 14:03:37 compute-0 ceph-mon[75031]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:37 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/772679129' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:03:37 compute-0 nova_compute[239261]: 2026-01-21 14:03:37.745 239265 INFO nova.compute.resource_tracker [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 172aa181-ce4f-4953-808e-b8a26e60249f
Jan 21 14:03:37 compute-0 podman[239939]: 2026-01-21 14:03:37.762327354 +0000 UTC m=+0.037577995 container create 1b23cacbe06a0a542c819bce6d36328b04f464f2e1b9c3eaa181d77873d413d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 14:03:37 compute-0 systemd[1]: Started libpod-conmon-1b23cacbe06a0a542c819bce6d36328b04f464f2e1b9c3eaa181d77873d413d2.scope.
Jan 21 14:03:37 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:03:37 compute-0 nova_compute[239261]: 2026-01-21 14:03:37.819 239265 DEBUG nova.compute.resource_tracker [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c085848ba0a104c105fcb721376907d89217cb9262efcbb6aa3a87a21497573b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:37 compute-0 nova_compute[239261]: 2026-01-21 14:03:37.820 239265 DEBUG nova.compute.resource_tracker [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c085848ba0a104c105fcb721376907d89217cb9262efcbb6aa3a87a21497573b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c085848ba0a104c105fcb721376907d89217cb9262efcbb6aa3a87a21497573b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c085848ba0a104c105fcb721376907d89217cb9262efcbb6aa3a87a21497573b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:37 compute-0 podman[239939]: 2026-01-21 14:03:37.836388795 +0000 UTC m=+0.111639446 container init 1b23cacbe06a0a542c819bce6d36328b04f464f2e1b9c3eaa181d77873d413d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:03:37 compute-0 podman[239939]: 2026-01-21 14:03:37.747629302 +0000 UTC m=+0.022879963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:03:37 compute-0 podman[239939]: 2026-01-21 14:03:37.842031423 +0000 UTC m=+0.117282064 container start 1b23cacbe06a0a542c819bce6d36328b04f464f2e1b9c3eaa181d77873d413d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:03:37 compute-0 podman[239939]: 2026-01-21 14:03:37.845842496 +0000 UTC m=+0.121093157 container attach 1b23cacbe06a0a542c819bce6d36328b04f464f2e1b9c3eaa181d77873d413d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 14:03:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]: {
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:     "0": [
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:         {
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "devices": [
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "/dev/loop3"
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             ],
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_name": "ceph_lv0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_size": "21470642176",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "name": "ceph_lv0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "tags": {
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.cluster_name": "ceph",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.crush_device_class": "",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.encrypted": "0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.objectstore": "bluestore",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.osd_id": "0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.type": "block",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.vdo": "0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.with_tpm": "0"
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             },
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "type": "block",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "vg_name": "ceph_vg0"
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:         }
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:     ],
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:     "1": [
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:         {
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "devices": [
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "/dev/loop4"
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             ],
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_name": "ceph_lv1",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_size": "21470642176",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "name": "ceph_lv1",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "tags": {
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.cluster_name": "ceph",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.crush_device_class": "",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.encrypted": "0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.objectstore": "bluestore",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.osd_id": "1",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.type": "block",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.vdo": "0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.with_tpm": "0"
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             },
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "type": "block",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "vg_name": "ceph_vg1"
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:         }
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:     ],
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:     "2": [
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:         {
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "devices": [
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "/dev/loop5"
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             ],
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_name": "ceph_lv2",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_size": "21470642176",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "name": "ceph_lv2",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "tags": {
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.cluster_name": "ceph",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.crush_device_class": "",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.encrypted": "0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.objectstore": "bluestore",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.osd_id": "2",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.type": "block",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.vdo": "0",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:                 "ceph.with_tpm": "0"
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             },
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "type": "block",
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:             "vg_name": "ceph_vg2"
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:         }
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]:     ]
Jan 21 14:03:38 compute-0 agitated_mcclintock[239954]: }
Jan 21 14:03:38 compute-0 systemd[1]: libpod-1b23cacbe06a0a542c819bce6d36328b04f464f2e1b9c3eaa181d77873d413d2.scope: Deactivated successfully.
Jan 21 14:03:38 compute-0 podman[239939]: 2026-01-21 14:03:38.169980025 +0000 UTC m=+0.445230666 container died 1b23cacbe06a0a542c819bce6d36328b04f464f2e1b9c3eaa181d77873d413d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c085848ba0a104c105fcb721376907d89217cb9262efcbb6aa3a87a21497573b-merged.mount: Deactivated successfully.
Jan 21 14:03:38 compute-0 podman[239939]: 2026-01-21 14:03:38.218826916 +0000 UTC m=+0.494077557 container remove 1b23cacbe06a0a542c819bce6d36328b04f464f2e1b9c3eaa181d77873d413d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 21 14:03:38 compute-0 systemd[1]: libpod-conmon-1b23cacbe06a0a542c819bce6d36328b04f464f2e1b9c3eaa181d77873d413d2.scope: Deactivated successfully.
Jan 21 14:03:38 compute-0 sudo[239838]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:38 compute-0 sudo[239974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:03:38 compute-0 sudo[239974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:03:38 compute-0 sudo[239974]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:38 compute-0 sudo[239999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:03:38 compute-0 sudo[239999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:03:38 compute-0 podman[240035]: 2026-01-21 14:03:38.68845662 +0000 UTC m=+0.062099167 container create 8639c92dc95114d9d8713b5a9f138a14e67a368279cbf9fae0afc6f7cf73aecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 14:03:38 compute-0 systemd[1]: Started libpod-conmon-8639c92dc95114d9d8713b5a9f138a14e67a368279cbf9fae0afc6f7cf73aecc.scope.
Jan 21 14:03:38 compute-0 podman[240035]: 2026-01-21 14:03:38.663002124 +0000 UTC m=+0.036644771 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:03:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:03:38 compute-0 podman[240035]: 2026-01-21 14:03:38.778711179 +0000 UTC m=+0.152353736 container init 8639c92dc95114d9d8713b5a9f138a14e67a368279cbf9fae0afc6f7cf73aecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 14:03:38 compute-0 podman[240035]: 2026-01-21 14:03:38.791453732 +0000 UTC m=+0.165096279 container start 8639c92dc95114d9d8713b5a9f138a14e67a368279cbf9fae0afc6f7cf73aecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 14:03:38 compute-0 podman[240035]: 2026-01-21 14:03:38.795546113 +0000 UTC m=+0.169188680 container attach 8639c92dc95114d9d8713b5a9f138a14e67a368279cbf9fae0afc6f7cf73aecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_fermi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 14:03:38 compute-0 vigilant_fermi[240052]: 167 167
Jan 21 14:03:38 compute-0 systemd[1]: libpod-8639c92dc95114d9d8713b5a9f138a14e67a368279cbf9fae0afc6f7cf73aecc.scope: Deactivated successfully.
Jan 21 14:03:38 compute-0 podman[240035]: 2026-01-21 14:03:38.797130662 +0000 UTC m=+0.170773249 container died 8639c92dc95114d9d8713b5a9f138a14e67a368279cbf9fae0afc6f7cf73aecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 14:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e95db2db34fe979e1bdbf9f8abc9b09c8b9e5288406fb43cf1f94ccec6228d-merged.mount: Deactivated successfully.
Jan 21 14:03:38 compute-0 podman[240035]: 2026-01-21 14:03:38.839943274 +0000 UTC m=+0.213585821 container remove 8639c92dc95114d9d8713b5a9f138a14e67a368279cbf9fae0afc6f7cf73aecc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 14:03:38 compute-0 systemd[1]: libpod-conmon-8639c92dc95114d9d8713b5a9f138a14e67a368279cbf9fae0afc6f7cf73aecc.scope: Deactivated successfully.
Jan 21 14:03:38 compute-0 nova_compute[239261]: 2026-01-21 14:03:38.853 239265 INFO nova.scheduler.client.report [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] [req-f4e5e6a5-f66c-42b0-9761-3e32434aac95] Created resource provider record via placement API for resource provider with UUID 172aa181-ce4f-4953-808e-b8a26e60249f and name compute-0.ctlplane.example.com.
Jan 21 14:03:39 compute-0 podman[240076]: 2026-01-21 14:03:39.019247631 +0000 UTC m=+0.058885608 container create 5d35384a549fbdc4a84b360c206dea86268f1b950fe57ba35ded29294bda0ede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lichterman, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 21 14:03:39 compute-0 systemd[1]: Started libpod-conmon-5d35384a549fbdc4a84b360c206dea86268f1b950fe57ba35ded29294bda0ede.scope.
Jan 21 14:03:39 compute-0 podman[240076]: 2026-01-21 14:03:38.990488725 +0000 UTC m=+0.030126802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:03:39 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c34845f1671bc3cce17c6445c01225b8fa4c689d0e922da088ce36ea8ff4f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c34845f1671bc3cce17c6445c01225b8fa4c689d0e922da088ce36ea8ff4f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c34845f1671bc3cce17c6445c01225b8fa4c689d0e922da088ce36ea8ff4f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c34845f1671bc3cce17c6445c01225b8fa4c689d0e922da088ce36ea8ff4f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:03:39 compute-0 podman[240076]: 2026-01-21 14:03:39.112430672 +0000 UTC m=+0.152068729 container init 5d35384a549fbdc4a84b360c206dea86268f1b950fe57ba35ded29294bda0ede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:03:39 compute-0 podman[240076]: 2026-01-21 14:03:39.125704888 +0000 UTC m=+0.165342875 container start 5d35384a549fbdc4a84b360c206dea86268f1b950fe57ba35ded29294bda0ede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:03:39 compute-0 podman[240076]: 2026-01-21 14:03:39.130799694 +0000 UTC m=+0.170437741 container attach 5d35384a549fbdc4a84b360c206dea86268f1b950fe57ba35ded29294bda0ede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 14:03:39 compute-0 nova_compute[239261]: 2026-01-21 14:03:39.266 239265 DEBUG oslo_concurrency.processutils [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:03:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:03:39
Jan 21 14:03:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:03:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:03:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'backups', '.mgr']
Jan 21 14:03:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:03:39 compute-0 ceph-mon[75031]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:39 compute-0 lvm[240190]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:03:39 compute-0 lvm[240191]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:03:39 compute-0 lvm[240190]: VG ceph_vg0 finished
Jan 21 14:03:39 compute-0 lvm[240191]: VG ceph_vg1 finished
Jan 21 14:03:39 compute-0 lvm[240193]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:03:39 compute-0 lvm[240193]: VG ceph_vg2 finished
Jan 21 14:03:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:03:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129718124' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:03:39 compute-0 nova_compute[239261]: 2026-01-21 14:03:39.839 239265 DEBUG oslo_concurrency.processutils [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:03:39 compute-0 nova_compute[239261]: 2026-01-21 14:03:39.846 239265 DEBUG nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 21 14:03:39 compute-0 nova_compute[239261]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 21 14:03:39 compute-0 nova_compute[239261]: 2026-01-21 14:03:39.846 239265 INFO nova.virt.libvirt.host [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] kernel doesn't support AMD SEV
Jan 21 14:03:39 compute-0 nova_compute[239261]: 2026-01-21 14:03:39.847 239265 DEBUG nova.compute.provider_tree [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Updating inventory in ProviderTree for provider 172aa181-ce4f-4953-808e-b8a26e60249f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 21 14:03:39 compute-0 nova_compute[239261]: 2026-01-21 14:03:39.847 239265 DEBUG nova.virt.libvirt.driver [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 21 14:03:39 compute-0 agitated_lichterman[240092]: {}
Jan 21 14:03:39 compute-0 nova_compute[239261]: 2026-01-21 14:03:39.896 239265 DEBUG nova.scheduler.client.report [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Updated inventory for provider 172aa181-ce4f-4953-808e-b8a26e60249f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 21 14:03:39 compute-0 nova_compute[239261]: 2026-01-21 14:03:39.896 239265 DEBUG nova.compute.provider_tree [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Updating resource provider 172aa181-ce4f-4953-808e-b8a26e60249f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 21 14:03:39 compute-0 nova_compute[239261]: 2026-01-21 14:03:39.896 239265 DEBUG nova.compute.provider_tree [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Updating inventory in ProviderTree for provider 172aa181-ce4f-4953-808e-b8a26e60249f with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 21 14:03:39 compute-0 systemd[1]: libpod-5d35384a549fbdc4a84b360c206dea86268f1b950fe57ba35ded29294bda0ede.scope: Deactivated successfully.
Jan 21 14:03:39 compute-0 systemd[1]: libpod-5d35384a549fbdc4a84b360c206dea86268f1b950fe57ba35ded29294bda0ede.scope: Consumed 1.293s CPU time.
Jan 21 14:03:39 compute-0 conmon[240092]: conmon 5d35384a549fbdc4a84b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d35384a549fbdc4a84b360c206dea86268f1b950fe57ba35ded29294bda0ede.scope/container/memory.events
Jan 21 14:03:39 compute-0 podman[240076]: 2026-01-21 14:03:39.943603755 +0000 UTC m=+0.983241712 container died 5d35384a549fbdc4a84b360c206dea86268f1b950fe57ba35ded29294bda0ede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 21 14:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-14c34845f1671bc3cce17c6445c01225b8fa4c689d0e922da088ce36ea8ff4f4-merged.mount: Deactivated successfully.
Jan 21 14:03:39 compute-0 podman[240076]: 2026-01-21 14:03:39.993852749 +0000 UTC m=+1.033490706 container remove 5d35384a549fbdc4a84b360c206dea86268f1b950fe57ba35ded29294bda0ede (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lichterman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:03:40 compute-0 systemd[1]: libpod-conmon-5d35384a549fbdc4a84b360c206dea86268f1b950fe57ba35ded29294bda0ede.scope: Deactivated successfully.
Jan 21 14:03:40 compute-0 nova_compute[239261]: 2026-01-21 14:03:40.023 239265 DEBUG nova.compute.provider_tree [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Updating resource provider 172aa181-ce4f-4953-808e-b8a26e60249f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 21 14:03:40 compute-0 sudo[239999]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:03:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:03:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:03:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:03:40 compute-0 sudo[240212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:03:40 compute-0 sudo[240212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:03:40 compute-0 sudo[240212]: pam_unix(sudo:session): session closed for user root
Jan 21 14:03:40 compute-0 nova_compute[239261]: 2026-01-21 14:03:40.291 239265 DEBUG nova.compute.resource_tracker [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:03:40 compute-0 nova_compute[239261]: 2026-01-21 14:03:40.292 239265 DEBUG oslo_concurrency.lockutils [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:03:40 compute-0 nova_compute[239261]: 2026-01-21 14:03:40.292 239265 DEBUG nova.service [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 21 14:03:40 compute-0 nova_compute[239261]: 2026-01-21 14:03:40.374 239265 DEBUG nova.service [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 21 14:03:40 compute-0 nova_compute[239261]: 2026-01-21 14:03:40.375 239265 DEBUG nova.servicegroup.drivers.db [None req-e3fcb015-1daa-4bec-a7c2-e4cd094c6af7 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 21 14:03:40 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3129718124' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:03:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:03:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:03:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:03:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:41 compute-0 ceph-mon[75031]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:43 compute-0 ceph-mon[75031]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:44 compute-0 ceph-mon[75031]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:47 compute-0 ceph-mon[75031]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:48 compute-0 podman[240238]: 2026-01-21 14:03:48.322145428 +0000 UTC m=+0.049916909 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 14:03:48 compute-0 podman[240237]: 2026-01-21 14:03:48.377109509 +0000 UTC m=+0.104880730 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 14:03:49 compute-0 ceph-mon[75031]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:03:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:03:51 compute-0 ceph-mon[75031]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.545283) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004231545342, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1254, "num_deletes": 505, "total_data_size": 1489049, "memory_usage": 1519952, "flush_reason": "Manual Compaction"}
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004231557339, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1464379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13676, "largest_seqno": 14929, "table_properties": {"data_size": 1458762, "index_size": 2501, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14346, "raw_average_key_size": 18, "raw_value_size": 1445617, "raw_average_value_size": 1816, "num_data_blocks": 115, "num_entries": 796, "num_filter_entries": 796, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004126, "oldest_key_time": 1769004126, "file_creation_time": 1769004231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12109 microseconds, and 4540 cpu microseconds.
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.557392) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1464379 bytes OK
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.557414) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.558954) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.558974) EVENT_LOG_v1 {"time_micros": 1769004231558969, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.558998) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1482262, prev total WAL file size 1482262, number of live WAL files 2.
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.559533) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1430KB)], [32(7795KB)]
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004231559576, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9446885, "oldest_snapshot_seqno": -1}
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3836 keys, 7546579 bytes, temperature: kUnknown
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004231638004, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7546579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7519093, "index_size": 16816, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 93879, "raw_average_key_size": 24, "raw_value_size": 7447789, "raw_average_value_size": 1941, "num_data_blocks": 713, "num_entries": 3836, "num_filter_entries": 3836, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.638224) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7546579 bytes
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.639579) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.3 rd, 96.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.6 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(11.6) write-amplify(5.2) OK, records in: 4859, records dropped: 1023 output_compression: NoCompression
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.639599) EVENT_LOG_v1 {"time_micros": 1769004231639589, "job": 14, "event": "compaction_finished", "compaction_time_micros": 78503, "compaction_time_cpu_micros": 16591, "output_level": 6, "num_output_files": 1, "total_output_size": 7546579, "num_input_records": 4859, "num_output_records": 3836, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004231639963, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004231641494, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.559474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.641593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.641598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.641600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.641602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:03:51 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:03:51.641604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:03:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:54 compute-0 ceph-mon[75031]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:55 compute-0 ceph-mon[75031]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:03:57 compute-0 ceph-mon[75031]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:03:59 compute-0 ceph-mon[75031]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:04:00 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4044765144' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:04:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:04:00 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4044765144' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:04:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:04:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4177767373' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:04:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:04:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4177767373' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:04:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:04:01 compute-0 ceph-mon[75031]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:01 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/4044765144' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:04:01 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/4044765144' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:04:01 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/4177767373' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:04:01 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/4177767373' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:04:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:04:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4288707461' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:04:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:04:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4288707461' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:04:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:02 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/4288707461' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:04:02 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/4288707461' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:04:03 compute-0 ceph-mon[75031]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:05 compute-0 ceph-mon[75031]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:06 compute-0 nova_compute[239261]: 2026-01-21 14:04:06.376 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:04:06 compute-0 nova_compute[239261]: 2026-01-21 14:04:06.403 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:04:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:04:07 compute-0 ceph-mon[75031]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:09 compute-0 ceph-mon[75031]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:04:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:04:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:04:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:04:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:04:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:04:11 compute-0 ceph-mon[75031]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:04:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:13 compute-0 ceph-mon[75031]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:15 compute-0 ceph-mon[75031]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:04:17 compute-0 ceph-mon[75031]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:19 compute-0 podman[240282]: 2026-01-21 14:04:19.336283068 +0000 UTC m=+0.057217207 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 21 14:04:19 compute-0 podman[240281]: 2026-01-21 14:04:19.371436542 +0000 UTC m=+0.100212974 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 21 14:04:19 compute-0 ceph-mon[75031]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:04:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3357 writes, 15K keys, 3357 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3357 writes, 3357 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1236 writes, 5422 keys, 1236 commit groups, 1.0 writes per commit group, ingest: 8.33 MB, 0.01 MB/s
                                           Interval WAL: 1236 writes, 1236 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     40.2      0.40              0.05         7    0.057       0      0       0.0       0.0
                                             L6      1/0    7.20 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.7     28.3     23.4      1.82              0.12         6    0.304     24K   3192       0.0       0.0
                                            Sum      1/0    7.20 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7     23.2     26.4      2.22              0.18        13    0.171     24K   3192       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     18.2     19.1      1.52              0.07         6    0.254     13K   1949       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0     28.3     23.4      1.82              0.12         6    0.304     24K   3192       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     40.5      0.39              0.05         6    0.066       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 2.2 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 1.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562240bf58d0#2 capacity: 308.00 MB usage: 1.93 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(108,1.71 MB,0.55591%) FilterBlock(14,74.98 KB,0.023775%) IndexBlock(14,153.05 KB,0.0485259%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 21 14:04:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:04:21 compute-0 ceph-mon[75031]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:23 compute-0 ceph-mon[75031]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:24 compute-0 ceph-mon[75031]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:04:27 compute-0 ceph-mon[75031]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:29 compute-0 ceph-mon[75031]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:31 compute-0 ceph-mon[75031]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:04:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:32 compute-0 ceph-mon[75031]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:04:33.894 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:04:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:04:33.895 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:04:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:04:33.895 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:04:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:35 compute-0 ceph-mon[75031]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.726 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.749 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.750 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.751 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.751 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.752 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.752 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.753 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.753 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.754 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.791 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.791 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.792 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.792 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:04:35 compute-0 nova_compute[239261]: 2026-01-21 14:04:35.793 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:04:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:04:36 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2086561351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:04:36 compute-0 nova_compute[239261]: 2026-01-21 14:04:36.588 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.795s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:04:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:04:36 compute-0 nova_compute[239261]: 2026-01-21 14:04:36.790 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:04:36 compute-0 nova_compute[239261]: 2026-01-21 14:04:36.792 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5149MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:04:36 compute-0 nova_compute[239261]: 2026-01-21 14:04:36.793 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:04:36 compute-0 nova_compute[239261]: 2026-01-21 14:04:36.793 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:04:36 compute-0 nova_compute[239261]: 2026-01-21 14:04:36.909 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:04:36 compute-0 nova_compute[239261]: 2026-01-21 14:04:36.910 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:04:37 compute-0 nova_compute[239261]: 2026-01-21 14:04:37.057 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:04:37 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2086561351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:04:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:04:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301294913' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:04:37 compute-0 nova_compute[239261]: 2026-01-21 14:04:37.670 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:04:37 compute-0 nova_compute[239261]: 2026-01-21 14:04:37.678 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:04:37 compute-0 nova_compute[239261]: 2026-01-21 14:04:37.703 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:04:37 compute-0 nova_compute[239261]: 2026-01-21 14:04:37.704 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:04:37 compute-0 nova_compute[239261]: 2026-01-21 14:04:37.704 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:04:37 compute-0 ceph-osd[86795]: bluestore.MempoolThread fragmentation_score=0.000121 took=0.000016s
Jan 21 14:04:37 compute-0 ceph-osd[85740]: bluestore.MempoolThread fragmentation_score=0.000118 took=0.000014s
Jan 21 14:04:37 compute-0 ceph-osd[87843]: bluestore.MempoolThread fragmentation_score=0.000135 took=0.000023s
Jan 21 14:04:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:38 compute-0 ceph-mon[75031]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:38 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1301294913' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:04:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:04:39
Jan 21 14:04:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:04:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:04:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.data']
Jan 21 14:04:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:04:39 compute-0 ceph-mon[75031]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:40 compute-0 sudo[240369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:04:40 compute-0 sudo[240369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:04:40 compute-0 sudo[240369]: pam_unix(sudo:session): session closed for user root
Jan 21 14:04:40 compute-0 sudo[240394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:04:40 compute-0 sudo[240394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:04:40 compute-0 sudo[240394]: pam_unix(sudo:session): session closed for user root
Jan 21 14:04:40 compute-0 ceph-mon[75031]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:04:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:04:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:04:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:04:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:04:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:04:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:04:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:04:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:04:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:04:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:04:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:04:41 compute-0 sudo[240450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:04:41 compute-0 sudo[240450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:04:41 compute-0 sudo[240450]: pam_unix(sudo:session): session closed for user root
Jan 21 14:04:41 compute-0 sudo[240475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:04:41 compute-0 sudo[240475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:04:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:04:41 compute-0 podman[240511]: 2026-01-21 14:04:41.400975086 +0000 UTC m=+0.066062534 container create 9d99acd4fb937ccdced502a4b7d59e488b2a49f0604b50068a94a1f9e2fc59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 14:04:41 compute-0 systemd[1]: Started libpod-conmon-9d99acd4fb937ccdced502a4b7d59e488b2a49f0604b50068a94a1f9e2fc59c0.scope.
Jan 21 14:04:41 compute-0 podman[240511]: 2026-01-21 14:04:41.372871676 +0000 UTC m=+0.037959194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:04:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:04:41 compute-0 podman[240511]: 2026-01-21 14:04:41.498908944 +0000 UTC m=+0.163996402 container init 9d99acd4fb937ccdced502a4b7d59e488b2a49f0604b50068a94a1f9e2fc59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bhaskara, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 14:04:41 compute-0 podman[240511]: 2026-01-21 14:04:41.509764571 +0000 UTC m=+0.174852019 container start 9d99acd4fb937ccdced502a4b7d59e488b2a49f0604b50068a94a1f9e2fc59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bhaskara, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:04:41 compute-0 podman[240511]: 2026-01-21 14:04:41.513579675 +0000 UTC m=+0.178667143 container attach 9d99acd4fb937ccdced502a4b7d59e488b2a49f0604b50068a94a1f9e2fc59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:04:41 compute-0 upbeat_bhaskara[240527]: 167 167
Jan 21 14:04:41 compute-0 systemd[1]: libpod-9d99acd4fb937ccdced502a4b7d59e488b2a49f0604b50068a94a1f9e2fc59c0.scope: Deactivated successfully.
Jan 21 14:04:41 compute-0 podman[240511]: 2026-01-21 14:04:41.517318336 +0000 UTC m=+0.182405774 container died 9d99acd4fb937ccdced502a4b7d59e488b2a49f0604b50068a94a1f9e2fc59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 14:04:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-da4ca60ec046778b35919d404eb6ac11369d68b86165ba96859f734a1d23fe97-merged.mount: Deactivated successfully.
Jan 21 14:04:41 compute-0 podman[240511]: 2026-01-21 14:04:41.572119283 +0000 UTC m=+0.237206751 container remove 9d99acd4fb937ccdced502a4b7d59e488b2a49f0604b50068a94a1f9e2fc59c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 21 14:04:41 compute-0 systemd[1]: libpod-conmon-9d99acd4fb937ccdced502a4b7d59e488b2a49f0604b50068a94a1f9e2fc59c0.scope: Deactivated successfully.
Jan 21 14:04:41 compute-0 podman[240549]: 2026-01-21 14:04:41.77293129 +0000 UTC m=+0.069667244 container create 32b71fec9c8f1079c39f0b3ea461649ebce0fd3e7fbbed988ad33a28441360e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:04:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:04:41 compute-0 systemd[1]: Started libpod-conmon-32b71fec9c8f1079c39f0b3ea461649ebce0fd3e7fbbed988ad33a28441360e8.scope.
Jan 21 14:04:41 compute-0 podman[240549]: 2026-01-21 14:04:41.744737597 +0000 UTC m=+0.041473621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:04:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a16de680e84759b372b714c67fd4da9236d7ce948f935295428812b16184e98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a16de680e84759b372b714c67fd4da9236d7ce948f935295428812b16184e98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a16de680e84759b372b714c67fd4da9236d7ce948f935295428812b16184e98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a16de680e84759b372b714c67fd4da9236d7ce948f935295428812b16184e98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a16de680e84759b372b714c67fd4da9236d7ce948f935295428812b16184e98/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:41 compute-0 podman[240549]: 2026-01-21 14:04:41.859102198 +0000 UTC m=+0.155838172 container init 32b71fec9c8f1079c39f0b3ea461649ebce0fd3e7fbbed988ad33a28441360e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:04:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:04:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:04:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:04:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:04:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:04:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:04:41 compute-0 podman[240549]: 2026-01-21 14:04:41.872234761 +0000 UTC m=+0.168970725 container start 32b71fec9c8f1079c39f0b3ea461649ebce0fd3e7fbbed988ad33a28441360e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:04:41 compute-0 podman[240549]: 2026-01-21 14:04:41.876193979 +0000 UTC m=+0.172930033 container attach 32b71fec9c8f1079c39f0b3ea461649ebce0fd3e7fbbed988ad33a28441360e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 14:04:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:42 compute-0 dreamy_shockley[240565]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:04:42 compute-0 dreamy_shockley[240565]: --> All data devices are unavailable
Jan 21 14:04:42 compute-0 systemd[1]: libpod-32b71fec9c8f1079c39f0b3ea461649ebce0fd3e7fbbed988ad33a28441360e8.scope: Deactivated successfully.
Jan 21 14:04:42 compute-0 podman[240549]: 2026-01-21 14:04:42.40509186 +0000 UTC m=+0.701827824 container died 32b71fec9c8f1079c39f0b3ea461649ebce0fd3e7fbbed988ad33a28441360e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 14:04:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a16de680e84759b372b714c67fd4da9236d7ce948f935295428812b16184e98-merged.mount: Deactivated successfully.
Jan 21 14:04:42 compute-0 podman[240549]: 2026-01-21 14:04:42.454804402 +0000 UTC m=+0.751540376 container remove 32b71fec9c8f1079c39f0b3ea461649ebce0fd3e7fbbed988ad33a28441360e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 14:04:42 compute-0 systemd[1]: libpod-conmon-32b71fec9c8f1079c39f0b3ea461649ebce0fd3e7fbbed988ad33a28441360e8.scope: Deactivated successfully.
Jan 21 14:04:42 compute-0 sudo[240475]: pam_unix(sudo:session): session closed for user root
Jan 21 14:04:42 compute-0 sudo[240596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:04:42 compute-0 sudo[240596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:04:42 compute-0 sudo[240596]: pam_unix(sudo:session): session closed for user root
Jan 21 14:04:42 compute-0 sudo[240621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:04:42 compute-0 sudo[240621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:04:42 compute-0 ceph-mon[75031]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:43 compute-0 podman[240657]: 2026-01-21 14:04:43.024398054 +0000 UTC m=+0.052260296 container create 281b7bd7e0d24941e6fe81297e8283d0e769639fedf83cc3fa9375df360f837e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 14:04:43 compute-0 systemd[1]: Started libpod-conmon-281b7bd7e0d24941e6fe81297e8283d0e769639fedf83cc3fa9375df360f837e.scope.
Jan 21 14:04:43 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:04:43 compute-0 podman[240657]: 2026-01-21 14:04:42.999705677 +0000 UTC m=+0.027567969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:04:43 compute-0 podman[240657]: 2026-01-21 14:04:43.109868045 +0000 UTC m=+0.137730367 container init 281b7bd7e0d24941e6fe81297e8283d0e769639fedf83cc3fa9375df360f837e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 21 14:04:43 compute-0 podman[240657]: 2026-01-21 14:04:43.120341172 +0000 UTC m=+0.148203444 container start 281b7bd7e0d24941e6fe81297e8283d0e769639fedf83cc3fa9375df360f837e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 14:04:43 compute-0 podman[240657]: 2026-01-21 14:04:43.124940436 +0000 UTC m=+0.152802698 container attach 281b7bd7e0d24941e6fe81297e8283d0e769639fedf83cc3fa9375df360f837e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 14:04:43 compute-0 loving_hamilton[240673]: 167 167
Jan 21 14:04:43 compute-0 systemd[1]: libpod-281b7bd7e0d24941e6fe81297e8283d0e769639fedf83cc3fa9375df360f837e.scope: Deactivated successfully.
Jan 21 14:04:43 compute-0 podman[240657]: 2026-01-21 14:04:43.12630375 +0000 UTC m=+0.154166022 container died 281b7bd7e0d24941e6fe81297e8283d0e769639fedf83cc3fa9375df360f837e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 14:04:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b0708957a4676429143ec5cab21760dd177c0f6aeed1c36c5004a46420c7cd3-merged.mount: Deactivated successfully.
Jan 21 14:04:43 compute-0 podman[240657]: 2026-01-21 14:04:43.180549152 +0000 UTC m=+0.208411384 container remove 281b7bd7e0d24941e6fe81297e8283d0e769639fedf83cc3fa9375df360f837e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 14:04:43 compute-0 systemd[1]: libpod-conmon-281b7bd7e0d24941e6fe81297e8283d0e769639fedf83cc3fa9375df360f837e.scope: Deactivated successfully.
Jan 21 14:04:43 compute-0 podman[240698]: 2026-01-21 14:04:43.337720127 +0000 UTC m=+0.052892152 container create ea206d4a7340e071391a9ef242400a79dee2af854f841984e4651228ed2c3c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:04:43 compute-0 systemd[1]: Started libpod-conmon-ea206d4a7340e071391a9ef242400a79dee2af854f841984e4651228ed2c3c80.scope.
Jan 21 14:04:43 compute-0 podman[240698]: 2026-01-21 14:04:43.313370687 +0000 UTC m=+0.028542722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:04:43 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:04:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0726936d277ebd966c9ec5f9652d5036ec98d57d8bb72486a825b65b818c92dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0726936d277ebd966c9ec5f9652d5036ec98d57d8bb72486a825b65b818c92dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0726936d277ebd966c9ec5f9652d5036ec98d57d8bb72486a825b65b818c92dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0726936d277ebd966c9ec5f9652d5036ec98d57d8bb72486a825b65b818c92dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:43 compute-0 podman[240698]: 2026-01-21 14:04:43.448822177 +0000 UTC m=+0.163994202 container init ea206d4a7340e071391a9ef242400a79dee2af854f841984e4651228ed2c3c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:04:43 compute-0 podman[240698]: 2026-01-21 14:04:43.456333062 +0000 UTC m=+0.171505067 container start ea206d4a7340e071391a9ef242400a79dee2af854f841984e4651228ed2c3c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:04:43 compute-0 podman[240698]: 2026-01-21 14:04:43.459833778 +0000 UTC m=+0.175005763 container attach ea206d4a7340e071391a9ef242400a79dee2af854f841984e4651228ed2c3c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_davinci, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]: {
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:     "0": [
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:         {
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "devices": [
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "/dev/loop3"
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             ],
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_name": "ceph_lv0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_size": "21470642176",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "name": "ceph_lv0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "tags": {
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.cluster_name": "ceph",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.crush_device_class": "",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.encrypted": "0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.objectstore": "bluestore",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.osd_id": "0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.type": "block",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.vdo": "0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.with_tpm": "0"
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             },
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "type": "block",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "vg_name": "ceph_vg0"
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:         }
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:     ],
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:     "1": [
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:         {
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "devices": [
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "/dev/loop4"
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             ],
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_name": "ceph_lv1",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_size": "21470642176",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "name": "ceph_lv1",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "tags": {
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.cluster_name": "ceph",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.crush_device_class": "",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.encrypted": "0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.objectstore": "bluestore",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.osd_id": "1",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.type": "block",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.vdo": "0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.with_tpm": "0"
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             },
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "type": "block",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "vg_name": "ceph_vg1"
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:         }
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:     ],
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:     "2": [
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:         {
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "devices": [
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "/dev/loop5"
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             ],
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_name": "ceph_lv2",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_size": "21470642176",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "name": "ceph_lv2",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "tags": {
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.cluster_name": "ceph",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.crush_device_class": "",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.encrypted": "0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.objectstore": "bluestore",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.osd_id": "2",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.type": "block",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.vdo": "0",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:                 "ceph.with_tpm": "0"
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             },
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "type": "block",
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:             "vg_name": "ceph_vg2"
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:         }
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]:     ]
Jan 21 14:04:43 compute-0 optimistic_davinci[240714]: }
Jan 21 14:04:43 compute-0 systemd[1]: libpod-ea206d4a7340e071391a9ef242400a79dee2af854f841984e4651228ed2c3c80.scope: Deactivated successfully.
Jan 21 14:04:43 compute-0 podman[240698]: 2026-01-21 14:04:43.771712414 +0000 UTC m=+0.486884499 container died ea206d4a7340e071391a9ef242400a79dee2af854f841984e4651228ed2c3c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_davinci, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 14:04:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0726936d277ebd966c9ec5f9652d5036ec98d57d8bb72486a825b65b818c92dd-merged.mount: Deactivated successfully.
Jan 21 14:04:43 compute-0 podman[240698]: 2026-01-21 14:04:43.818986876 +0000 UTC m=+0.534158901 container remove ea206d4a7340e071391a9ef242400a79dee2af854f841984e4651228ed2c3c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_davinci, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 14:04:43 compute-0 systemd[1]: libpod-conmon-ea206d4a7340e071391a9ef242400a79dee2af854f841984e4651228ed2c3c80.scope: Deactivated successfully.
Jan 21 14:04:43 compute-0 sudo[240621]: pam_unix(sudo:session): session closed for user root
Jan 21 14:04:43 compute-0 sudo[240735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:04:43 compute-0 sudo[240735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:04:43 compute-0 sudo[240735]: pam_unix(sudo:session): session closed for user root
Jan 21 14:04:44 compute-0 sudo[240760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:04:44 compute-0 sudo[240760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:04:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:44 compute-0 podman[240798]: 2026-01-21 14:04:44.338885597 +0000 UTC m=+0.029418054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:04:44 compute-0 podman[240798]: 2026-01-21 14:04:44.575824232 +0000 UTC m=+0.266356639 container create 405cffa8a0c0d9a4d23ea6a0762197be9fb0b6d376756efc04aa74ac9ff27812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 14:04:44 compute-0 systemd[1]: Started libpod-conmon-405cffa8a0c0d9a4d23ea6a0762197be9fb0b6d376756efc04aa74ac9ff27812.scope.
Jan 21 14:04:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:04:44 compute-0 podman[240798]: 2026-01-21 14:04:44.666052499 +0000 UTC m=+0.356584916 container init 405cffa8a0c0d9a4d23ea6a0762197be9fb0b6d376756efc04aa74ac9ff27812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_spence, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:04:44 compute-0 podman[240798]: 2026-01-21 14:04:44.677776168 +0000 UTC m=+0.368308545 container start 405cffa8a0c0d9a4d23ea6a0762197be9fb0b6d376756efc04aa74ac9ff27812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 14:04:44 compute-0 podman[240798]: 2026-01-21 14:04:44.68152497 +0000 UTC m=+0.372057367 container attach 405cffa8a0c0d9a4d23ea6a0762197be9fb0b6d376756efc04aa74ac9ff27812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:04:44 compute-0 brave_spence[240816]: 167 167
Jan 21 14:04:44 compute-0 systemd[1]: libpod-405cffa8a0c0d9a4d23ea6a0762197be9fb0b6d376756efc04aa74ac9ff27812.scope: Deactivated successfully.
Jan 21 14:04:44 compute-0 podman[240798]: 2026-01-21 14:04:44.687871656 +0000 UTC m=+0.378404043 container died 405cffa8a0c0d9a4d23ea6a0762197be9fb0b6d376756efc04aa74ac9ff27812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_spence, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 14:04:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a998464ba47d03527534be89b0f8868f6f7cd4d6d095a7697cbfab255a418e78-merged.mount: Deactivated successfully.
Jan 21 14:04:44 compute-0 podman[240798]: 2026-01-21 14:04:44.732906953 +0000 UTC m=+0.423439350 container remove 405cffa8a0c0d9a4d23ea6a0762197be9fb0b6d376756efc04aa74ac9ff27812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_spence, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:04:44 compute-0 systemd[1]: libpod-conmon-405cffa8a0c0d9a4d23ea6a0762197be9fb0b6d376756efc04aa74ac9ff27812.scope: Deactivated successfully.
Jan 21 14:04:45 compute-0 podman[240839]: 2026-01-21 14:04:44.916468305 +0000 UTC m=+0.040114317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:04:45 compute-0 podman[240839]: 2026-01-21 14:04:45.089040087 +0000 UTC m=+0.212686089 container create 4d6d914539bee0f8cb933e22b320987a7a8869c1d85e43ede07e4e833ca24fcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 14:04:45 compute-0 systemd[1]: Started libpod-conmon-4d6d914539bee0f8cb933e22b320987a7a8869c1d85e43ede07e4e833ca24fcc.scope.
Jan 21 14:04:45 compute-0 ceph-mon[75031]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:45 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a22710af431645f81365ec482a675054fb1312ec6d4080e0f04201f0c5c6e7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a22710af431645f81365ec482a675054fb1312ec6d4080e0f04201f0c5c6e7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a22710af431645f81365ec482a675054fb1312ec6d4080e0f04201f0c5c6e7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a22710af431645f81365ec482a675054fb1312ec6d4080e0f04201f0c5c6e7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:04:45 compute-0 podman[240839]: 2026-01-21 14:04:45.3604825 +0000 UTC m=+0.484128512 container init 4d6d914539bee0f8cb933e22b320987a7a8869c1d85e43ede07e4e833ca24fcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:04:45 compute-0 podman[240839]: 2026-01-21 14:04:45.368244791 +0000 UTC m=+0.491890763 container start 4d6d914539bee0f8cb933e22b320987a7a8869c1d85e43ede07e4e833ca24fcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:04:45 compute-0 podman[240839]: 2026-01-21 14:04:45.371602783 +0000 UTC m=+0.495248775 container attach 4d6d914539bee0f8cb933e22b320987a7a8869c1d85e43ede07e4e833ca24fcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hugle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:04:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 21 14:04:46 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2043529930' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 21 14:04:46 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14336 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 21 14:04:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 21 14:04:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 21 14:04:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:46 compute-0 lvm[240935]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:04:46 compute-0 lvm[240935]: VG ceph_vg0 finished
Jan 21 14:04:46 compute-0 lvm[240934]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:04:46 compute-0 lvm[240934]: VG ceph_vg1 finished
Jan 21 14:04:46 compute-0 lvm[240937]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:04:46 compute-0 lvm[240937]: VG ceph_vg2 finished
Jan 21 14:04:46 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2043529930' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 21 14:04:46 compute-0 romantic_hugle[240856]: {}
Jan 21 14:04:46 compute-0 systemd[1]: libpod-4d6d914539bee0f8cb933e22b320987a7a8869c1d85e43ede07e4e833ca24fcc.scope: Deactivated successfully.
Jan 21 14:04:46 compute-0 systemd[1]: libpod-4d6d914539bee0f8cb933e22b320987a7a8869c1d85e43ede07e4e833ca24fcc.scope: Consumed 1.403s CPU time.
Jan 21 14:04:46 compute-0 podman[240839]: 2026-01-21 14:04:46.233144882 +0000 UTC m=+1.356790864 container died 4d6d914539bee0f8cb933e22b320987a7a8869c1d85e43ede07e4e833ca24fcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 14:04:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a22710af431645f81365ec482a675054fb1312ec6d4080e0f04201f0c5c6e7f-merged.mount: Deactivated successfully.
Jan 21 14:04:46 compute-0 podman[240839]: 2026-01-21 14:04:46.421374409 +0000 UTC m=+1.545020391 container remove 4d6d914539bee0f8cb933e22b320987a7a8869c1d85e43ede07e4e833ca24fcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hugle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 14:04:46 compute-0 systemd[1]: libpod-conmon-4d6d914539bee0f8cb933e22b320987a7a8869c1d85e43ede07e4e833ca24fcc.scope: Deactivated successfully.
Jan 21 14:04:46 compute-0 sudo[240760]: pam_unix(sudo:session): session closed for user root
Jan 21 14:04:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:04:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:04:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:04:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:04:46 compute-0 sudo[240954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:04:46 compute-0 sudo[240954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:04:46 compute-0 sudo[240954]: pam_unix(sudo:session): session closed for user root
Jan 21 14:04:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:04:47 compute-0 ceph-mon[75031]: from='client.14336 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 21 14:04:47 compute-0 ceph-mon[75031]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:04:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:04:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:49 compute-0 ceph-mon[75031]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:50 compute-0 podman[240980]: 2026-01-21 14:04:50.358737349 +0000 UTC m=+0.073011346 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 14:04:50 compute-0 podman[240979]: 2026-01-21 14:04:50.400529956 +0000 UTC m=+0.113525712 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:04:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:04:51 compute-0 ceph-mon[75031]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:04:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:53 compute-0 ceph-mon[75031]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:55 compute-0 ceph-mon[75031]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:04:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:00 compute-0 ceph-mon[75031]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:00 compute-0 ceph-mon[75031]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:00 compute-0 ceph-mon[75031]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:02 compute-0 ceph-mon[75031]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:05 compute-0 ceph-mon[75031]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 21 14:05:06 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 21 14:05:06 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 21 14:05:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 21 14:05:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 21 14:05:06 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 21 14:05:07 compute-0 ceph-mon[75031]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:07 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 21 14:05:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:09 compute-0 ceph-mon[75031]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:05:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:05:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:05:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:05:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:05:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:05:11 compute-0 ceph-mon[75031]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:13 compute-0 ceph-mon[75031]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:15 compute-0 ceph-mon[75031]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:17 compute-0 ceph-mon[75031]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:19 compute-0 ceph-mon[75031]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:21 compute-0 podman[241023]: 2026-01-21 14:05:21.331793462 +0000 UTC m=+0.052468187 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 14:05:21 compute-0 podman[241022]: 2026-01-21 14:05:21.380449096 +0000 UTC m=+0.098668321 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 21 14:05:21 compute-0 ceph-mon[75031]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:05:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/210577449' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:05:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:05:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/210577449' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:05:23 compute-0 ceph-mon[75031]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/210577449' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:05:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/210577449' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:05:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:25 compute-0 ceph-mon[75031]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:27 compute-0 ceph-mon[75031]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:05:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5868 writes, 24K keys, 5868 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5868 writes, 1010 syncs, 5.81 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 14:05:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:29 compute-0 ceph-mon[75031]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:31 compute-0 ceph-mon[75031]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:05:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 7208 writes, 29K keys, 7208 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7208 writes, 1431 syncs, 5.04 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 14:05:33 compute-0 ceph-mon[75031]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:05:33.895 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:05:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:05:33.896 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:05:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:05:33.896 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:05:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:35 compute-0 ceph-mon[75031]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:37 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:05:37 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5720 writes, 24K keys, 5720 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5720 writes, 926 syncs, 6.18 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 14:05:37 compute-0 ceph-mon[75031]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.698 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.698 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.740 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.740 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.740 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.740 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.741 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.741 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.741 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.742 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.742 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.771 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.771 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.771 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.772 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:05:37 compute-0 nova_compute[239261]: 2026-01-21 14:05:37.772 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:05:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:05:38 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/823407037' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:05:38 compute-0 nova_compute[239261]: 2026-01-21 14:05:38.336 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:05:38 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/823407037' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
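
[annotation] The ceph df exchange above (nova spawns the command, the mon dispatches it for client.openstack, and the reply lands 0.564s later) is how the libvirt RBD image backend measures pool capacity. A minimal sketch of the same round trip, assuming the stock `ceph df --format=json` layout with per-pool `stored` and `max_avail` fields:

    import json
    import subprocess

    def rbd_pool_capacity(pool: str, conf: str = "/etc/ceph/ceph.conf",
                          client_id: str = "openstack") -> tuple[int, int]:
        """Return (stored_bytes, max_avail_bytes) for one pool via `ceph df`."""
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf])
        for entry in json.loads(out)["pools"]:
            if entry["name"] == pool:
                return entry["stats"]["stored"], entry["stats"]["max_avail"]
        raise KeyError(f"pool {pool!r} not in ceph df output")
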
Jan 21 14:05:38 compute-0 nova_compute[239261]: 2026-01-21 14:05:38.499 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:05:38 compute-0 nova_compute[239261]: 2026-01-21 14:05:38.500 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5159MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
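
[annotation] The pci_devices blob in that hypervisor view is ordinary JSON, so it can be inspected directly. A short sketch that tallies the dump by vendor (here 5 devices with vendor_id 8086 and 6 with 1af4, all with numa_node null, consistent with the NUMA-affinity warning just above):

    import json
    from collections import Counter

    def vendors(pci_devices_json: str) -> Counter:
        """Count PCI devices per vendor_id in the resource-tracker dump."""
        return Counter(dev["vendor_id"] for dev in json.loads(pci_devices_json))
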
Jan 21 14:05:38 compute-0 nova_compute[239261]: 2026-01-21 14:05:38.501 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:05:38 compute-0 nova_compute[239261]: 2026-01-21 14:05:38.501 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:05:38 compute-0 nova_compute[239261]: 2026-01-21 14:05:38.561 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:05:38 compute-0 nova_compute[239261]: 2026-01-21 14:05:38.561 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:05:38 compute-0 nova_compute[239261]: 2026-01-21 14:05:38.577 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:05:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:05:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174190100' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:05:39 compute-0 nova_compute[239261]: 2026-01-21 14:05:39.134 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:05:39 compute-0 nova_compute[239261]: 2026-01-21 14:05:39.141 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:05:39 compute-0 nova_compute[239261]: 2026-01-21 14:05:39.200 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:05:39 compute-0 nova_compute[239261]: 2026-01-21 14:05:39.203 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:05:39 compute-0 nova_compute[239261]: 2026-01-21 14:05:39.204 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
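
[annotation] The inventory payload in the "Inventory has not changed" line is the resource tracker's view as reported to placement. A sketch that rebuilds that exact dict from the "Final resource view" totals, under the assumption that the reserved memory and allocation ratios come from nova.conf (reserved_host_memory_mb, cpu_allocation_ratio, disk_allocation_ratio):

    def placement_inventory(phys_ram_mb: int, total_vcpus: int, disk_gb: int,
                            reserved_ram_mb: int = 512,
                            cpu_ratio: float = 4.0,
                            disk_ratio: float = 0.9) -> dict:
        """Mirror the inventory dict nova reports to placement."""
        return {
            "MEMORY_MB": {"total": phys_ram_mb, "reserved": reserved_ram_mb,
                          "min_unit": 1, "max_unit": phys_ram_mb,
                          "step_size": 1, "allocation_ratio": 1.0},
            "VCPU": {"total": total_vcpus, "reserved": 0, "min_unit": 1,
                     "max_unit": total_vcpus, "step_size": 1,
                     "allocation_ratio": cpu_ratio},
            "DISK_GB": {"total": disk_gb, "reserved": 0, "min_unit": 1,
                        "max_unit": disk_gb, "step_size": 1,
                        "allocation_ratio": disk_ratio},
        }

    # placement_inventory(7679, 8, 59) reproduces the payload logged at 14:05:39.200.
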
Jan 21 14:05:39 compute-0 ceph-mon[75031]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:39 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1174190100' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:05:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:05:39
Jan 21 14:05:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:05:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:05:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', '.mgr', 'vms']
Jan 21 14:05:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
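
[annotation] The balancer pass above runs inside the mgr: upmap mode, a 5% misplaced ceiling, and room for up to 10 upmap changes per pass, of which it prepared none because all 305 PGs are already active+clean and evenly mapped. A loose CLI-level equivalent, as a sketch only (the module does this in-process; the plan name is arbitrary, and on an already balanced cluster the optimize step may simply report nothing to do):

    import subprocess

    def plan_upmaps(plan: str = "manual_plan") -> str:
        """Build and display an upmap plan via the `ceph balancer` CLI."""
        subprocess.check_call(["ceph", "balancer", "mode", "upmap"])
        subprocess.check_call(["ceph", "balancer", "optimize", plan])
        return subprocess.check_output(["ceph", "balancer", "show", plan],
                                       text=True)
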
Jan 21 14:05:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:40 compute-0 ceph-mgr[75322]: [devicehealth INFO root] Check health
Jan 21 14:05:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:05:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
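
[annotation] The rbd_support handlers above reload per-pool trash-purge and mirror-snapshot schedules; the bare `start_after=` means no schedules are defined for vms, volumes, backups, or images. A sketch that queries the same state through the rbd CLI (assuming the client tooling is installed; the mirror listing may come back empty or fail where mirroring is unconfigured, hence subprocess.run rather than check_output):

    import subprocess

    def list_rbd_schedules(pools=("vms", "volumes", "backups", "images")):
        """Print trash-purge and mirror-snapshot schedules per pool."""
        for pool in pools:
            for kind in (["trash", "purge", "schedule", "ls"],
                         ["mirror", "snapshot", "schedule", "ls"]):
                out = subprocess.run(["rbd", *kind, "--pool", pool],
                                     capture_output=True, text=True)
                print(pool, " ".join(kind), out.stdout.strip() or "<none>")
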
Jan 21 14:05:41 compute-0 ceph-mon[75031]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:43 compute-0 ceph-mon[75031]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:45 compute-0 ceph-mon[75031]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:46 compute-0 sudo[241112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:05:46 compute-0 sudo[241112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:05:46 compute-0 sudo[241112]: pam_unix(sudo:session): session closed for user root
Jan 21 14:05:46 compute-0 sudo[241137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 21 14:05:46 compute-0 sudo[241137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:05:47 compute-0 ceph-mon[75031]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:47 compute-0 sudo[241137]: pam_unix(sudo:session): session closed for user root
Jan 21 14:05:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:05:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:05:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:05:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:05:47 compute-0 sudo[241182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:05:47 compute-0 sudo[241182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:05:47 compute-0 sudo[241182]: pam_unix(sudo:session): session closed for user root
Jan 21 14:05:47 compute-0 sudo[241207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:05:47 compute-0 sudo[241207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:05:47 compute-0 sudo[241207]: pam_unix(sudo:session): session closed for user root
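
[annotation] The check-host/gather-facts pair above is the cephadm orchestrator probing the host as ceph-admin: each probe first locates python3, then sudo-runs the cephadm binary staged under /var/lib/ceph. A sketch that replays one of those probes locally, using only the paths and flags already present in the log:

    import subprocess

    CEPHADM = ("/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/"
               "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")

    def run_cephadm(*args: str, timeout: int = 895) -> str:
        """E.g. run_cephadm("check-host") or run_cephadm("gather-facts")."""
        cmd = ["sudo", "/bin/python3", CEPHADM, "--timeout", str(timeout), *args]
        return subprocess.check_output(cmd, text=True)
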
Jan 21 14:05:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:05:47 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:05:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:05:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:05:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:05:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:05:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:05:47 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:05:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:05:47 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:05:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:05:47 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:05:47 compute-0 sudo[241263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:05:47 compute-0 sudo[241263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:05:47 compute-0 sudo[241263]: pam_unix(sudo:session): session closed for user root
Jan 21 14:05:48 compute-0 sudo[241288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:05:48 compute-0 sudo[241288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:05:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:05:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:05:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:05:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:05:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:05:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:05:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:05:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:05:48 compute-0 podman[241326]: 2026-01-21 14:05:48.439415105 +0000 UTC m=+0.047795704 container create 56292b6f7dd7348af424eb4ff1915d81bc071d012a4e80683eb4b59b4a0248bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bassi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 14:05:48 compute-0 systemd[1]: Started libpod-conmon-56292b6f7dd7348af424eb4ff1915d81bc071d012a4e80683eb4b59b4a0248bc.scope.
Jan 21 14:05:48 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:05:48 compute-0 podman[241326]: 2026-01-21 14:05:48.418622815 +0000 UTC m=+0.027003464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:05:48 compute-0 podman[241326]: 2026-01-21 14:05:48.526349948 +0000 UTC m=+0.134730587 container init 56292b6f7dd7348af424eb4ff1915d81bc071d012a4e80683eb4b59b4a0248bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 14:05:48 compute-0 podman[241326]: 2026-01-21 14:05:48.534254992 +0000 UTC m=+0.142635601 container start 56292b6f7dd7348af424eb4ff1915d81bc071d012a4e80683eb4b59b4a0248bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:05:48 compute-0 podman[241326]: 2026-01-21 14:05:48.538116057 +0000 UTC m=+0.146496706 container attach 56292b6f7dd7348af424eb4ff1915d81bc071d012a4e80683eb4b59b4a0248bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bassi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 14:05:48 compute-0 boring_bassi[241343]: 167 167
Jan 21 14:05:48 compute-0 systemd[1]: libpod-56292b6f7dd7348af424eb4ff1915d81bc071d012a4e80683eb4b59b4a0248bc.scope: Deactivated successfully.
Jan 21 14:05:48 compute-0 podman[241326]: 2026-01-21 14:05:48.540844453 +0000 UTC m=+0.149225062 container died 56292b6f7dd7348af424eb4ff1915d81bc071d012a4e80683eb4b59b4a0248bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 14:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-22c9b2c359261d9fc55fdfe8d33f83bc689eecc0ede0c60a740ad98b031688b8-merged.mount: Deactivated successfully.
Jan 21 14:05:48 compute-0 podman[241326]: 2026-01-21 14:05:48.59534556 +0000 UTC m=+0.203726209 container remove 56292b6f7dd7348af424eb4ff1915d81bc071d012a4e80683eb4b59b4a0248bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bassi, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 14:05:48 compute-0 systemd[1]: libpod-conmon-56292b6f7dd7348af424eb4ff1915d81bc071d012a4e80683eb4b59b4a0248bc.scope: Deactivated successfully.
Jan 21 14:05:48 compute-0 podman[241365]: 2026-01-21 14:05:48.816008854 +0000 UTC m=+0.059511601 container create bc8bf07d389a17ed72352d0271637b6a79e51a9bc430d4c4e1efbefdd5de7798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldwasser, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:05:48 compute-0 systemd[1]: Started libpod-conmon-bc8bf07d389a17ed72352d0271637b6a79e51a9bc430d4c4e1efbefdd5de7798.scope.
Jan 21 14:05:48 compute-0 podman[241365]: 2026-01-21 14:05:48.794404315 +0000 UTC m=+0.037907072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:05:48 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5e3cf943fd3839c173c88a34dbc8ce94546d564ed717504c14d0015236410c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5e3cf943fd3839c173c88a34dbc8ce94546d564ed717504c14d0015236410c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5e3cf943fd3839c173c88a34dbc8ce94546d564ed717504c14d0015236410c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5e3cf943fd3839c173c88a34dbc8ce94546d564ed717504c14d0015236410c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5e3cf943fd3839c173c88a34dbc8ce94546d564ed717504c14d0015236410c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:48 compute-0 podman[241365]: 2026-01-21 14:05:48.934290526 +0000 UTC m=+0.177793343 container init bc8bf07d389a17ed72352d0271637b6a79e51a9bc430d4c4e1efbefdd5de7798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 21 14:05:48 compute-0 podman[241365]: 2026-01-21 14:05:48.949767136 +0000 UTC m=+0.193269853 container start bc8bf07d389a17ed72352d0271637b6a79e51a9bc430d4c4e1efbefdd5de7798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 14:05:48 compute-0 podman[241365]: 2026-01-21 14:05:48.953125408 +0000 UTC m=+0.196628225 container attach bc8bf07d389a17ed72352d0271637b6a79e51a9bc430d4c4e1efbefdd5de7798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldwasser, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:05:49 compute-0 ceph-mon[75031]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:49 compute-0 angry_goldwasser[241382]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:05:49 compute-0 angry_goldwasser[241382]: --> All data devices are unavailable
Jan 21 14:05:49 compute-0 systemd[1]: libpod-bc8bf07d389a17ed72352d0271637b6a79e51a9bc430d4c4e1efbefdd5de7798.scope: Deactivated successfully.
Jan 21 14:05:49 compute-0 podman[241365]: 2026-01-21 14:05:49.48795139 +0000 UTC m=+0.731454157 container died bc8bf07d389a17ed72352d0271637b6a79e51a9bc430d4c4e1efbefdd5de7798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f5e3cf943fd3839c173c88a34dbc8ce94546d564ed717504c14d0015236410c-merged.mount: Deactivated successfully.
Jan 21 14:05:49 compute-0 podman[241365]: 2026-01-21 14:05:49.54464867 +0000 UTC m=+0.788151407 container remove bc8bf07d389a17ed72352d0271637b6a79e51a9bc430d4c4e1efbefdd5de7798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldwasser, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:05:49 compute-0 systemd[1]: libpod-conmon-bc8bf07d389a17ed72352d0271637b6a79e51a9bc430d4c4e1efbefdd5de7798.scope: Deactivated successfully.
Jan 21 14:05:49 compute-0 sudo[241288]: pam_unix(sudo:session): session closed for user root
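
[annotation] That ceph-volume run ends with "All data devices are unavailable" because the three LVs already carry BlueStore OSDs, as the `lvm list` dump just below confirms (osd_id 0-2 on ceph_vg0-2). A dry-run sketch using ceph-volume's report mode to check eligibility without touching the devices; run as root, and `--format json` for batch reports is an assumption based on recent releases:

    import json
    import subprocess

    def batch_report(devices):
        """Preview what `lvm batch` would do; empty for already-prepared LVs."""
        out = subprocess.check_output(
            ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
             *devices])
        return json.loads(out)

    # batch_report(["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
    #               "/dev/ceph_vg2/ceph_lv2"])
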
Jan 21 14:05:49 compute-0 sudo[241414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:05:49 compute-0 sudo[241414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:05:49 compute-0 sudo[241414]: pam_unix(sudo:session): session closed for user root
Jan 21 14:05:49 compute-0 sudo[241439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:05:49 compute-0 sudo[241439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:05:50 compute-0 podman[241476]: 2026-01-21 14:05:50.047976039 +0000 UTC m=+0.051203427 container create 4ecbc1df975623cb1ea72e14b2d0bdb81c526981debfbdd472daabdfa8900900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:05:50 compute-0 systemd[1]: Started libpod-conmon-4ecbc1df975623cb1ea72e14b2d0bdb81c526981debfbdd472daabdfa8900900.scope.
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:50 compute-0 podman[241476]: 2026-01-21 14:05:50.022419253 +0000 UTC m=+0.025646651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:05:50 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:05:50 compute-0 podman[241476]: 2026-01-21 14:05:50.141226226 +0000 UTC m=+0.144453614 container init 4ecbc1df975623cb1ea72e14b2d0bdb81c526981debfbdd472daabdfa8900900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_panini, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:05:50 compute-0 podman[241476]: 2026-01-21 14:05:50.151618912 +0000 UTC m=+0.154846260 container start 4ecbc1df975623cb1ea72e14b2d0bdb81c526981debfbdd472daabdfa8900900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 14:05:50 compute-0 podman[241476]: 2026-01-21 14:05:50.154610865 +0000 UTC m=+0.157838223 container attach 4ecbc1df975623cb1ea72e14b2d0bdb81c526981debfbdd472daabdfa8900900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 21 14:05:50 compute-0 inspiring_panini[241492]: 167 167
Jan 21 14:05:50 compute-0 systemd[1]: libpod-4ecbc1df975623cb1ea72e14b2d0bdb81c526981debfbdd472daabdfa8900900.scope: Deactivated successfully.
Jan 21 14:05:50 compute-0 podman[241476]: 2026-01-21 14:05:50.156310777 +0000 UTC m=+0.159538185 container died 4ecbc1df975623cb1ea72e14b2d0bdb81c526981debfbdd472daabdfa8900900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-886e82a22c3f15bb001662577447625aba7fa2c57bf52971a3e5859195647686-merged.mount: Deactivated successfully.
Jan 21 14:05:50 compute-0 podman[241476]: 2026-01-21 14:05:50.205317199 +0000 UTC m=+0.208544577 container remove 4ecbc1df975623cb1ea72e14b2d0bdb81c526981debfbdd472daabdfa8900900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 14:05:50 compute-0 systemd[1]: libpod-conmon-4ecbc1df975623cb1ea72e14b2d0bdb81c526981debfbdd472daabdfa8900900.scope: Deactivated successfully.
Jan 21 14:05:50 compute-0 podman[241515]: 2026-01-21 14:05:50.445704057 +0000 UTC m=+0.075325109 container create 0936cc05ddb3928a97e5e5c62def3393e05efd3bc0a82cbb6635a8e87de769a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_haslett, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 21 14:05:50 compute-0 systemd[1]: Started libpod-conmon-0936cc05ddb3928a97e5e5c62def3393e05efd3bc0a82cbb6635a8e87de769a8.scope.
Jan 21 14:05:50 compute-0 podman[241515]: 2026-01-21 14:05:50.415535656 +0000 UTC m=+0.045156798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:05:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
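
[annotation] The pg_autoscaler arithmetic above can be reproduced from the logged numbers: each raw pg target equals usage_ratio x bias x (target PGs per OSD x OSD count), e.g. 7.185749983720779e-06 x 1.0 x 300 ~= 0.00216 for '.mgr', assuming this cluster's three OSDs and the default mon_target_pg_per_osd of 100. A sketch of that step plus a simplified power-of-two quantizer (the real module also applies change thresholds and per-pool minimums, which is why the empty pools stay at 32 rather than shrinking to 1):

    def pg_target(usage_ratio: float, bias: float,
                  num_osds: int = 3, target_pg_per_osd: int = 100) -> float:
        """Raw target before quantization, matching the logged 'pg target'."""
        return usage_ratio * bias * num_osds * target_pg_per_osd

    def quantize(target: float, minimum: int = 1) -> int:
        """Round up to a power of two; simplified versus the mgr module."""
        n = minimum
        while n < target:
            n *= 2
        return n
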
Jan 21 14:05:50 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5dc5d14bf839c029b740f3af233d68571f0d6d9d7b60afb4279134e414c7ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5dc5d14bf839c029b740f3af233d68571f0d6d9d7b60afb4279134e414c7ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5dc5d14bf839c029b740f3af233d68571f0d6d9d7b60afb4279134e414c7ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5dc5d14bf839c029b740f3af233d68571f0d6d9d7b60afb4279134e414c7ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:50 compute-0 podman[241515]: 2026-01-21 14:05:50.537379286 +0000 UTC m=+0.167000428 container init 0936cc05ddb3928a97e5e5c62def3393e05efd3bc0a82cbb6635a8e87de769a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 14:05:50 compute-0 podman[241515]: 2026-01-21 14:05:50.543305352 +0000 UTC m=+0.172926404 container start 0936cc05ddb3928a97e5e5c62def3393e05efd3bc0a82cbb6635a8e87de769a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_haslett, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:05:50 compute-0 podman[241515]: 2026-01-21 14:05:50.54731255 +0000 UTC m=+0.176933632 container attach 0936cc05ddb3928a97e5e5c62def3393e05efd3bc0a82cbb6635a8e87de769a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_haslett, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:05:50 compute-0 cranky_haslett[241532]: {
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:     "0": [
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:         {
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "devices": [
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "/dev/loop3"
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             ],
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_name": "ceph_lv0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_size": "21470642176",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "name": "ceph_lv0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "tags": {
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.cluster_name": "ceph",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.crush_device_class": "",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.encrypted": "0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.objectstore": "bluestore",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.osd_id": "0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.type": "block",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.vdo": "0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.with_tpm": "0"
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             },
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "type": "block",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "vg_name": "ceph_vg0"
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:         }
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:     ],
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:     "1": [
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:         {
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "devices": [
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "/dev/loop4"
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             ],
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_name": "ceph_lv1",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_size": "21470642176",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "name": "ceph_lv1",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "tags": {
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.cluster_name": "ceph",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.crush_device_class": "",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.encrypted": "0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.objectstore": "bluestore",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.osd_id": "1",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.type": "block",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.vdo": "0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.with_tpm": "0"
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             },
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "type": "block",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "vg_name": "ceph_vg1"
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:         }
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:     ],
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:     "2": [
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:         {
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "devices": [
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "/dev/loop5"
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             ],
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_name": "ceph_lv2",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_size": "21470642176",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "name": "ceph_lv2",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "tags": {
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.cluster_name": "ceph",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.crush_device_class": "",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.encrypted": "0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.objectstore": "bluestore",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.osd_id": "2",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.type": "block",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.vdo": "0",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:                 "ceph.with_tpm": "0"
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             },
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "type": "block",
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:             "vg_name": "ceph_vg2"
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:         }
Jan 21 14:05:50 compute-0 cranky_haslett[241532]:     ]
Jan 21 14:05:50 compute-0 cranky_haslett[241532]: }
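The JSON the cranky_haslett container has been printing line by line is a `ceph-volume lvm list --format json` report: an object keyed by OSD id ("0", "1", "2"), each holding a list of logical-volume records whose flat `lv_tags` string duplicates the structured `tags` map. A minimal sketch of consuming it, assuming the container's stdout has been captured to a hypothetical lvm_list.json:

    import json

    # lvm_list.json is a hypothetical capture of the report printed above.
    with open("lvm_list.json") as fh:
        report = json.load(fh)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            # For this report, lv_tags is the same data flattened as "k=v,k=v,...":
            assert tags == dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} store={tags['ceph.objectstore']}")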
Jan 21 14:05:50 compute-0 systemd[1]: libpod-0936cc05ddb3928a97e5e5c62def3393e05efd3bc0a82cbb6635a8e87de769a8.scope: Deactivated successfully.
Jan 21 14:05:50 compute-0 podman[241515]: 2026-01-21 14:05:50.842291857 +0000 UTC m=+0.471912969 container died 0936cc05ddb3928a97e5e5c62def3393e05efd3bc0a82cbb6635a8e87de769a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 14:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b5dc5d14bf839c029b740f3af233d68571f0d6d9d7b60afb4279134e414c7ed-merged.mount: Deactivated successfully.
Jan 21 14:05:50 compute-0 podman[241515]: 2026-01-21 14:05:50.890964731 +0000 UTC m=+0.520585783 container remove 0936cc05ddb3928a97e5e5c62def3393e05efd3bc0a82cbb6635a8e87de769a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 14:05:50 compute-0 systemd[1]: libpod-conmon-0936cc05ddb3928a97e5e5c62def3393e05efd3bc0a82cbb6635a8e87de769a8.scope: Deactivated successfully.
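The surrounding systemd/podman lines trace the full lifecycle of that one-shot helper container: create, init, start, attach, died, remove, then the conmon scope winding down. A small sketch that extracts the sequence from journal lines like these (the lifecycle helper and its regex are illustrative, not part of any tool shown here):

    import re

    # podman journal lines embed "container <event> <64-hex id> (image=..., name=..., ...)".
    EVENT = re.compile(
        r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
    )

    def lifecycle(journal_lines):
        for line in journal_lines:
            m = EVENT.search(line)
            if m:
                yield m["name"], m["event"], m["cid"][:12]

    # yields e.g. ("cranky_haslett", "died", "0936cc05ddb3"),
    #             ("cranky_haslett", "remove", "0936cc05ddb3"), ...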
Jan 21 14:05:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:50 compute-0 sudo[241439]: pam_unix(sudo:session): session closed for user root
Jan 21 14:05:51 compute-0 sudo[241555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:05:51 compute-0 sudo[241555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:05:51 compute-0 sudo[241555]: pam_unix(sudo:session): session closed for user root
Jan 21 14:05:51 compute-0 sudo[241580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:05:51 compute-0 sudo[241580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
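The audited COMMAND above shows the per-call shape of a cephadm invocation: the copy deployed under /var/lib/ceph/<fsid>/ is run with --image pinning the container digest and --timeout bounding the run, and everything after "--" is handed to ceph-volume inside the container. This particular `raw list --format json` prints {} a few lines below, since these OSDs are LVM-backed and the raw inventory is empty. A sketch reproducing the call with subprocess (values copied from the log line; actually running it requires this same deployment):

    import json
    import subprocess

    FSID = "2f0e9cad-f0a3-5869-9cc3-8d84d071866a"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Arguments before "--" configure cephadm; the rest goes to ceph-volume
    # inside the pinned container image.
    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    raw_osds = json.loads(out)  # {} on this host: all OSDs are LVM-based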
Jan 21 14:05:51 compute-0 ceph-mon[75031]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:51 compute-0 podman[241617]: 2026-01-21 14:05:51.368664851 +0000 UTC m=+0.047878026 container create deae2ef8b7e9b8309ebfa48397b9ede91409780e7b1e2cf0e29eb9f358c40a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 14:05:51 compute-0 systemd[1]: Started libpod-conmon-deae2ef8b7e9b8309ebfa48397b9ede91409780e7b1e2cf0e29eb9f358c40a18.scope.
Jan 21 14:05:51 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:05:51 compute-0 podman[241617]: 2026-01-21 14:05:51.431237686 +0000 UTC m=+0.110450851 container init deae2ef8b7e9b8309ebfa48397b9ede91409780e7b1e2cf0e29eb9f358c40a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 14:05:51 compute-0 podman[241617]: 2026-01-21 14:05:51.437480659 +0000 UTC m=+0.116693794 container start deae2ef8b7e9b8309ebfa48397b9ede91409780e7b1e2cf0e29eb9f358c40a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 14:05:51 compute-0 magical_raman[241635]: 167 167
Jan 21 14:05:51 compute-0 podman[241617]: 2026-01-21 14:05:51.441777234 +0000 UTC m=+0.120990389 container attach deae2ef8b7e9b8309ebfa48397b9ede91409780e7b1e2cf0e29eb9f358c40a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 14:05:51 compute-0 systemd[1]: libpod-deae2ef8b7e9b8309ebfa48397b9ede91409780e7b1e2cf0e29eb9f358c40a18.scope: Deactivated successfully.
Jan 21 14:05:51 compute-0 podman[241617]: 2026-01-21 14:05:51.347928632 +0000 UTC m=+0.027141797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:05:51 compute-0 podman[241617]: 2026-01-21 14:05:51.442902502 +0000 UTC m=+0.122115637 container died deae2ef8b7e9b8309ebfa48397b9ede91409780e7b1e2cf0e29eb9f358c40a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 21 14:05:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ee3bbc3ece198749529d5c7ff44b6ccdfa89d7e45416d1585cc0a8adfbae2f8-merged.mount: Deactivated successfully.
Jan 21 14:05:51 compute-0 podman[241617]: 2026-01-21 14:05:51.481275084 +0000 UTC m=+0.160488219 container remove deae2ef8b7e9b8309ebfa48397b9ede91409780e7b1e2cf0e29eb9f358c40a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 14:05:51 compute-0 podman[241631]: 2026-01-21 14:05:51.486211635 +0000 UTC m=+0.081399268 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 21 14:05:51 compute-0 systemd[1]: libpod-conmon-deae2ef8b7e9b8309ebfa48397b9ede91409780e7b1e2cf0e29eb9f358c40a18.scope: Deactivated successfully.
Jan 21 14:05:51 compute-0 podman[241634]: 2026-01-21 14:05:51.512375967 +0000 UTC m=+0.101602764 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 14:05:51 compute-0 podman[241699]: 2026-01-21 14:05:51.651833558 +0000 UTC m=+0.055128224 container create 8db23cd7bea05ead78a69125b9951b73a5a443e62010976db2f52493bb88d0ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wozniak, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 14:05:51 compute-0 systemd[1]: Started libpod-conmon-8db23cd7bea05ead78a69125b9951b73a5a443e62010976db2f52493bb88d0ca.scope.
Jan 21 14:05:51 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:05:51 compute-0 podman[241699]: 2026-01-21 14:05:51.63153345 +0000 UTC m=+0.034828146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f888263e8eefbf49c012e55b7955243266c2a9e86160a1131526ac28387e88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f888263e8eefbf49c012e55b7955243266c2a9e86160a1131526ac28387e88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f888263e8eefbf49c012e55b7955243266c2a9e86160a1131526ac28387e88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f888263e8eefbf49c012e55b7955243266c2a9e86160a1131526ac28387e88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
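These kernel notices fire on each bind-remount into the container: without the XFS bigtime feature, on-disk inode timestamps top out at the raw epoch value the kernel prints. A quick check of what 0x7fffffff means:

    from datetime import datetime, timezone

    # 0x7fffffff is the 32-bit signed time_t ceiling the kernel reports.
    limit = 0x7FFFFFFF
    print(limit)                                           # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00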
Jan 21 14:05:51 compute-0 podman[241699]: 2026-01-21 14:05:51.736778762 +0000 UTC m=+0.140073448 container init 8db23cd7bea05ead78a69125b9951b73a5a443e62010976db2f52493bb88d0ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wozniak, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:05:51 compute-0 podman[241699]: 2026-01-21 14:05:51.74569282 +0000 UTC m=+0.148987466 container start 8db23cd7bea05ead78a69125b9951b73a5a443e62010976db2f52493bb88d0ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wozniak, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 14:05:51 compute-0 podman[241699]: 2026-01-21 14:05:51.749656998 +0000 UTC m=+0.152951674 container attach 8db23cd7bea05ead78a69125b9951b73a5a443e62010976db2f52493bb88d0ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wozniak, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 14:05:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:52 compute-0 lvm[241793]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:05:52 compute-0 lvm[241793]: VG ceph_vg0 finished
Jan 21 14:05:52 compute-0 lvm[241794]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:05:52 compute-0 lvm[241794]: VG ceph_vg1 finished
Jan 21 14:05:52 compute-0 lvm[241796]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:05:52 compute-0 lvm[241796]: VG ceph_vg2 finished
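The lvm[...] messages are event-driven autoactivation: as each loop PV is scanned online, LVM declares its single-PV volume group complete. The same PV-to-VG mapping can be confirmed from LVM's JSON reporting; a sketch, assuming the lvm2 tools on this host, with the expected output read off the log above:

    import json
    import subprocess

    report = json.loads(subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        check=True, capture_output=True, text=True,
    ).stdout)
    for pv in report["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"])
    # expected here: /dev/loop3 -> ceph_vg0, /dev/loop4 -> ceph_vg1,
    #                /dev/loop5 -> ceph_vg2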
Jan 21 14:05:52 compute-0 stupefied_wozniak[241715]: {}
Jan 21 14:05:52 compute-0 systemd[1]: libpod-8db23cd7bea05ead78a69125b9951b73a5a443e62010976db2f52493bb88d0ca.scope: Deactivated successfully.
Jan 21 14:05:52 compute-0 systemd[1]: libpod-8db23cd7bea05ead78a69125b9951b73a5a443e62010976db2f52493bb88d0ca.scope: Consumed 1.225s CPU time.
Jan 21 14:05:52 compute-0 podman[241699]: 2026-01-21 14:05:52.524925168 +0000 UTC m=+0.928219864 container died 8db23cd7bea05ead78a69125b9951b73a5a443e62010976db2f52493bb88d0ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wozniak, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 14:05:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-37f888263e8eefbf49c012e55b7955243266c2a9e86160a1131526ac28387e88-merged.mount: Deactivated successfully.
Jan 21 14:05:52 compute-0 podman[241699]: 2026-01-21 14:05:52.572314601 +0000 UTC m=+0.975609257 container remove 8db23cd7bea05ead78a69125b9951b73a5a443e62010976db2f52493bb88d0ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wozniak, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 21 14:05:52 compute-0 systemd[1]: libpod-conmon-8db23cd7bea05ead78a69125b9951b73a5a443e62010976db2f52493bb88d0ca.scope: Deactivated successfully.
Jan 21 14:05:52 compute-0 sudo[241580]: pam_unix(sudo:session): session closed for user root
Jan 21 14:05:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:05:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:05:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:05:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
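The two config-key set commands are the cephadm mgr module persisting this host's freshly gathered device inventory under mgr/cephadm/host.compute-0* keys in the monitor's config-key store. The cached copy can be read back with the stock CLI; a sketch, where treating the stored value as JSON matches current cephadm behaviour but is an assumption here:

    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(["ceph", "config-key", "get", key],
                         check=True, capture_output=True, text=True).stdout
    devices = json.loads(out)  # cephadm stores the inventory as a JSON blob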
Jan 21 14:05:52 compute-0 sudo[241811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:05:52 compute-0 sudo[241811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:05:52 compute-0 sudo[241811]: pam_unix(sudo:session): session closed for user root
Jan 21 14:05:53 compute-0 ceph-mon[75031]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:05:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:05:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:54 compute-0 ceph-mon[75031]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:05:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:57 compute-0 ceph-mon[75031]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:57 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:05:57.673 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:05:57 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:05:57.674 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:05:57 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:05:57.675 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
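The DbSetCommand in that transaction is the metadata agent acknowledging the new nb_cfg generation: it stamps neutron:ovn-metadata-sb-cfg into its own Chassis_Private row so the control plane can tell the agent has processed that configuration generation. A minimal sketch of the same write, assuming sb_idl is a connected ovsdbapp OVN-Southbound API object such as the agent holds internally:

    def ack_nb_cfg(sb_idl, chassis_uuid: str, nb_cfg: int) -> None:
        # Mirrors the logged DbSetCommand: one db_set on Chassis_Private,
        # merging the acknowledged generation into external_ids.
        sb_idl.db_set(
            "Chassis_Private", chassis_uuid,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ).execute(check_error=True)

    # e.g. ack_nb_cfg(sb_idl, "3ade990a-d6f9-4724-a58c-009e4fc34364", 2)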
Jan 21 14:05:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:05:59 compute-0 ceph-mon[75031]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:01 compute-0 ceph-mon[75031]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:03 compute-0 ceph-mon[75031]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:05 compute-0 ceph-mon[75031]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:07 compute-0 ceph-mon[75031]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:09 compute-0 ceph-mon[75031]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:06:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:06:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:06:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:06:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:06:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:06:11 compute-0 ceph-mon[75031]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:12 compute-0 ceph-mon[75031]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:15 compute-0 ceph-mon[75031]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:17 compute-0 ceph-mon[75031]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:19 compute-0 ceph-mon[75031]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:21 compute-0 ceph-mon[75031]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.456299) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004381456354, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1423, "num_deletes": 251, "total_data_size": 2266448, "memory_usage": 2300672, "flush_reason": "Manual Compaction"}
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004381481056, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2223233, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14930, "largest_seqno": 16352, "table_properties": {"data_size": 2216592, "index_size": 3776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13776, "raw_average_key_size": 19, "raw_value_size": 2203262, "raw_average_value_size": 3152, "num_data_blocks": 173, "num_entries": 699, "num_filter_entries": 699, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004232, "oldest_key_time": 1769004232, "file_creation_time": 1769004381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 24798 microseconds, and 4928 cpu microseconds.
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.481102) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2223233 bytes OK
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.481123) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.484191) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.484208) EVENT_LOG_v1 {"time_micros": 1769004381484203, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.484225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2260148, prev total WAL file size 2260148, number of live WAL files 2.
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.485107) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2171KB)], [35(7369KB)]
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004381485140, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9769812, "oldest_snapshot_seqno": -1}
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4021 keys, 7966260 bytes, temperature: kUnknown
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004381555740, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7966260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7937168, "index_size": 17910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 98228, "raw_average_key_size": 24, "raw_value_size": 7862253, "raw_average_value_size": 1955, "num_data_blocks": 758, "num_entries": 4021, "num_filter_entries": 4021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.556008) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7966260 bytes
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.558448) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.2 rd, 112.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.2 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 4535, records dropped: 514 output_compression: NoCompression
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.558473) EVENT_LOG_v1 {"time_micros": 1769004381558460, "job": 16, "event": "compaction_finished", "compaction_time_micros": 70691, "compaction_time_cpu_micros": 20079, "output_level": 6, "num_output_files": 1, "total_output_size": 7966260, "num_input_records": 4535, "num_output_records": 4021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004381559140, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004381561107, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.485044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.561258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.561265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.561268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.561271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:06:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:06:21.561274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
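The monitor's RocksDB interleaves human-readable lines with machine-readable EVENT_LOG_v1 records (flush_started, table_file_creation, flush_finished, compaction_started, compaction_finished, table_file_deletion); each record is plain JSON after the marker. A sketch extracting them, useful for following flush job 15 and compaction job 16 above:

    import json
    import re

    MARKER = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(journal_lines):
        for line in journal_lines:
            m = MARKER.search(line)
            if m:
                yield json.loads(m.group(1))

    # From the records above: job 16 read 9,769,812 bytes (one L0 + one L6
    # file), wrote one 7,966,260-byte L6 file, and dropped 514 of 4,535
    # input records, leaving the 4,021 keys in table #38.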
Jan 21 14:06:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:22 compute-0 podman[241837]: 2026-01-21 14:06:22.369347224 +0000 UTC m=+0.081482550 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 21 14:06:22 compute-0 podman[241836]: 2026-01-21 14:06:22.419402332 +0000 UTC m=+0.131560629 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 21 14:06:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:06:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725423415' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:06:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:06:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725423415' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:06:23 compute-0 ceph-mon[75031]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2725423415' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:06:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2725423415' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
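The df and osd pool get-quota dispatches from entity='client.openstack' are a cloud-side capacity poll, consistent with the Cinder RBD driver's periodic pool-stats refresh, which issues these same mon commands. The same query through librados, with the conffile path and client name mirroring this deployment as assumptions:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    ret, out, err = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
    cluster.shutdown()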
Jan 21 14:06:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:25 compute-0 ceph-mon[75031]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:27 compute-0 ceph-mon[75031]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:29 compute-0 ceph-mon[75031]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:30 compute-0 ceph-mon[75031]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:33 compute-0 ceph-mon[75031]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:06:33.896 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:06:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:06:33.897 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:06:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:06:33.897 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
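The acquiring/acquired/released trio is oslo.concurrency's standard lock tracing: a decorator wraps ProcessMonitor._check_child_processes so only one green thread checks child processes at a time, and each acquire and release is logged with its wait and hold durations. The shape of that guard, as a sketch:

    from oslo_concurrency import lockutils

    class ProcessMonitorSketch:
        # Produces exactly the "Acquiring lock" / "acquired" / "released"
        # debug lines seen above (lock name "_check_child_processes").
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            pass  # poll and respawn monitored child processes here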
Jan 21 14:06:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 21 14:06:35 compute-0 ceph-mon[75031]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
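From v770 the pgmap summaries grow a client-I/O tail ("11 KiB/s rd, 0 B/s wr, 18 op/s") alongside the usual PG-state and capacity fields. A sketch of parsing the recurring summary line, tail included:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail(?:; (?P<io>.+))?"
    )

    line = ("pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, "
            "60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s")
    print(PGMAP.search(line).groupdict())
    # {'ver': '770', 'pgs': '305', 'states': '305 active+clean', 'data': '461 KiB',
    #  'used': '136 MiB', 'avail': '60 GiB', 'total': '60 GiB',
    #  'io': '11 KiB/s rd, 0 B/s wr, 18 op/s'}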
Jan 21 14:06:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:06:37 compute-0 ceph-mon[75031]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:06:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.206 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.207 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.207 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.207 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:06:39 compute-0 ceph-mon[75031]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:06:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:06:39
Jan 21 14:06:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:06:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:06:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images', 'volumes', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data']
Jan 21 14:06:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.693 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.694 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.695 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.695 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.696 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.696 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.696 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.697 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.774 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.775 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.775 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.775 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:06:39 compute-0 nova_compute[239261]: 2026-01-21 14:06:39.775 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:06:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:06:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:06:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1053244310' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:06:40 compute-0 nova_compute[239261]: 2026-01-21 14:06:40.408 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:06:40 compute-0 nova_compute[239261]: 2026-01-21 14:06:40.577 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:06:40 compute-0 nova_compute[239261]: 2026-01-21 14:06:40.579 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5170MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:06:40 compute-0 nova_compute[239261]: 2026-01-21 14:06:40.579 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:06:40 compute-0 nova_compute[239261]: 2026-01-21 14:06:40.579 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:06:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:06:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:06:41 compute-0 ceph-mon[75031]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:06:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1053244310' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:06:41 compute-0 nova_compute[239261]: 2026-01-21 14:06:41.330 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:06:41 compute-0 nova_compute[239261]: 2026-01-21 14:06:41.331 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:06:41 compute-0 nova_compute[239261]: 2026-01-21 14:06:41.353 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:06:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:06:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525636412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:06:41 compute-0 nova_compute[239261]: 2026-01-21 14:06:41.945 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:06:41 compute-0 nova_compute[239261]: 2026-01-21 14:06:41.950 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:06:41 compute-0 nova_compute[239261]: 2026-01-21 14:06:41.978 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:06:41 compute-0 nova_compute[239261]: 2026-01-21 14:06:41.979 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:06:41 compute-0 nova_compute[239261]: 2026-01-21 14:06:41.980 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:06:42 compute-0 nova_compute[239261]: 2026-01-21 14:06:42.008 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:06:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:06:42 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/525636412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:06:43 compute-0 ceph-mon[75031]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:06:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:06:45 compute-0 ceph-mon[75031]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:06:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 21 14:06:47 compute-0 ceph-mon[75031]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 21 14:06:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:49 compute-0 ceph-mon[75031]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:06:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:06:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:51 compute-0 ceph-mon[75031]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:52 compute-0 sudo[241927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:06:52 compute-0 sudo[241927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:06:52 compute-0 sudo[241927]: pam_unix(sudo:session): session closed for user root
Jan 21 14:06:52 compute-0 podman[241952]: 2026-01-21 14:06:52.873605628 +0000 UTC m=+0.049362002 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Jan 21 14:06:52 compute-0 sudo[241964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 14:06:52 compute-0 sudo[241964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:06:52 compute-0 podman[241951]: 2026-01-21 14:06:52.907211112 +0000 UTC m=+0.086684777 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 21 14:06:53 compute-0 ceph-mon[75031]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:53 compute-0 podman[242064]: 2026-01-21 14:06:53.378375601 +0000 UTC m=+0.085442627 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:06:53 compute-0 podman[242064]: 2026-01-21 14:06:53.469394844 +0000 UTC m=+0.176461850 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:06:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:54 compute-0 sudo[241964]: pam_unix(sudo:session): session closed for user root
Jan 21 14:06:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:06:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:06:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:06:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:06:54 compute-0 sudo[242250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:06:54 compute-0 sudo[242250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:06:54 compute-0 sudo[242250]: pam_unix(sudo:session): session closed for user root
Jan 21 14:06:54 compute-0 sudo[242275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:06:54 compute-0 sudo[242275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:06:55 compute-0 sudo[242275]: pam_unix(sudo:session): session closed for user root
Jan 21 14:06:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 14:06:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 14:06:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:06:55 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:06:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:06:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:06:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:06:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:06:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:06:55 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:06:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:06:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:06:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:06:55 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:06:55 compute-0 sudo[242331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:06:55 compute-0 sudo[242331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:06:55 compute-0 sudo[242331]: pam_unix(sudo:session): session closed for user root
Jan 21 14:06:55 compute-0 sudo[242356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:06:55 compute-0 sudo[242356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:06:55 compute-0 ceph-mon[75031]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:06:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:06:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 14:06:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:06:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:06:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:06:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:06:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:06:55 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:06:55 compute-0 rsyslogd[1002]: imjournal: 3588 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 21 14:06:55 compute-0 podman[242393]: 2026-01-21 14:06:55.80066632 +0000 UTC m=+0.069813453 container create 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:06:55 compute-0 systemd[1]: Started libpod-conmon-7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279.scope.
Jan 21 14:06:55 compute-0 podman[242393]: 2026-01-21 14:06:55.774322754 +0000 UTC m=+0.043469957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:06:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:06:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:06:55 compute-0 podman[242393]: 2026-01-21 14:06:55.968894918 +0000 UTC m=+0.238042051 container init 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:06:55 compute-0 podman[242393]: 2026-01-21 14:06:55.981040485 +0000 UTC m=+0.250187608 container start 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:06:55 compute-0 podman[242393]: 2026-01-21 14:06:55.98570573 +0000 UTC m=+0.254852873 container attach 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:06:55 compute-0 xenodochial_ellis[242409]: 167 167
Jan 21 14:06:55 compute-0 systemd[1]: libpod-7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279.scope: Deactivated successfully.
Jan 21 14:06:55 compute-0 podman[242393]: 2026-01-21 14:06:55.989747448 +0000 UTC m=+0.258894581 container died 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 14:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ee760162e3a04b8f6f820fc43eb925166ec459eaf9fe3e2848e4117a55c88a6-merged.mount: Deactivated successfully.
Jan 21 14:06:56 compute-0 podman[242393]: 2026-01-21 14:06:56.052286763 +0000 UTC m=+0.321433886 container remove 7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:06:56 compute-0 systemd[1]: libpod-conmon-7997876eda8e5e12b806477964299a4a9831fe6db707929ab93b0ba17e000279.scope: Deactivated successfully.
Jan 21 14:06:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:56 compute-0 podman[242433]: 2026-01-21 14:06:56.258710568 +0000 UTC m=+0.065711013 container create 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:06:56 compute-0 systemd[1]: Started libpod-conmon-7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861.scope.
Jan 21 14:06:56 compute-0 podman[242433]: 2026-01-21 14:06:56.214128244 +0000 UTC m=+0.021128699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:06:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:56 compute-0 podman[242433]: 2026-01-21 14:06:56.347509726 +0000 UTC m=+0.154510191 container init 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:06:56 compute-0 podman[242433]: 2026-01-21 14:06:56.358801893 +0000 UTC m=+0.165802328 container start 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 14:06:56 compute-0 podman[242433]: 2026-01-21 14:06:56.363353355 +0000 UTC m=+0.170353810 container attach 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 21 14:06:56 compute-0 friendly_mayer[242450]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:06:56 compute-0 friendly_mayer[242450]: --> All data devices are unavailable
Jan 21 14:06:56 compute-0 systemd[1]: libpod-7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861.scope: Deactivated successfully.
Jan 21 14:06:56 compute-0 podman[242433]: 2026-01-21 14:06:56.985256922 +0000 UTC m=+0.792257427 container died 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-32c59ff6ef0a1ffd2184f23d31d233b84319c50969b4d6a144eceb64c098bec7-merged.mount: Deactivated successfully.
Jan 21 14:06:57 compute-0 podman[242433]: 2026-01-21 14:06:57.042702712 +0000 UTC m=+0.849703147 container remove 7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:06:57 compute-0 systemd[1]: libpod-conmon-7831ebfe5a50d0b4c93d1d7756e968b6ed601a0b6ac1bb568260e50f56238861.scope: Deactivated successfully.
Jan 21 14:06:57 compute-0 sudo[242356]: pam_unix(sudo:session): session closed for user root
Jan 21 14:06:57 compute-0 sudo[242481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:06:57 compute-0 sudo[242481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:06:57 compute-0 sudo[242481]: pam_unix(sudo:session): session closed for user root
Jan 21 14:06:57 compute-0 sudo[242506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:06:57 compute-0 sudo[242506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:06:57 compute-0 ceph-mon[75031]: pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:57 compute-0 podman[242544]: 2026-01-21 14:06:57.504393279 +0000 UTC m=+0.042143585 container create 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:06:57 compute-0 systemd[1]: Started libpod-conmon-8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307.scope.
Jan 21 14:06:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:06:57 compute-0 podman[242544]: 2026-01-21 14:06:57.486986102 +0000 UTC m=+0.024736428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:06:57 compute-0 podman[242544]: 2026-01-21 14:06:57.587124769 +0000 UTC m=+0.124875055 container init 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:06:57 compute-0 podman[242544]: 2026-01-21 14:06:57.593462115 +0000 UTC m=+0.131212391 container start 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 21 14:06:57 compute-0 podman[242544]: 2026-01-21 14:06:57.596740155 +0000 UTC m=+0.134490421 container attach 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:06:57 compute-0 systemd[1]: libpod-8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307.scope: Deactivated successfully.
Jan 21 14:06:57 compute-0 tender_brown[242561]: 167 167
Jan 21 14:06:57 compute-0 conmon[242561]: conmon 8e475f7a6301f94f2ea1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307.scope/container/memory.events
Jan 21 14:06:57 compute-0 podman[242544]: 2026-01-21 14:06:57.598363364 +0000 UTC m=+0.136113640 container died 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6590edf33a4743fcc22988c4eba794ed0b6df0b77f2fc6b6068fca0f43750698-merged.mount: Deactivated successfully.
Jan 21 14:06:57 compute-0 podman[242544]: 2026-01-21 14:06:57.63771101 +0000 UTC m=+0.175461286 container remove 8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 14:06:57 compute-0 systemd[1]: libpod-conmon-8e475f7a6301f94f2ea108010e61e4602e2ccb653700273e19edf8991c924307.scope: Deactivated successfully.
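The podman/systemd/conmon lines above trace one one-shot cephadm helper container (tender_brown) from init to remove in roughly 50 ms: it printed "167 167" (the ceph uid/gid pair used inside the image) and exited, after which conmon's warning about the missing memory.events file is expected, since the cgroup scope was already torn down before conmon could read it. A minimal sketch for reconstructing such lifecycles from podman's event stream (assuming only that the podman CLI is on PATH; the field names are podman's JSON event keys and are read defensively):

    import json
    import subprocess

    # Stream podman events as JSON, one object per line, and print the
    # same lifecycle states the journal shows: create, init, start,
    # attach, died, remove. "--since 5m" is illustrative.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--since", "5m"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"),
              ev.get("ID", "")[:12], ev.get("Name"))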
Jan 21 14:06:57 compute-0 podman[242583]: 2026-01-21 14:06:57.875044993 +0000 UTC m=+0.073696800 container create c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 14:06:57 compute-0 systemd[1]: Started libpod-conmon-c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7.scope.
Jan 21 14:06:57 compute-0 podman[242583]: 2026-01-21 14:06:57.843073708 +0000 UTC m=+0.041725595 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:06:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8258b65e87196343693ece3622252a999f682a8a081df63ca5bf00dde223fd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8258b65e87196343693ece3622252a999f682a8a081df63ca5bf00dde223fd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8258b65e87196343693ece3622252a999f682a8a081df63ca5bf00dde223fd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8258b65e87196343693ece3622252a999f682a8a081df63ca5bf00dde223fd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:57 compute-0 podman[242583]: 2026-01-21 14:06:57.976886021 +0000 UTC m=+0.175537878 container init c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:06:57 compute-0 podman[242583]: 2026-01-21 14:06:57.987103372 +0000 UTC m=+0.185755199 container start c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:06:57 compute-0 podman[242583]: 2026-01-21 14:06:57.992401282 +0000 UTC m=+0.191053109 container attach c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 14:06:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]: {
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:     "0": [
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:         {
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "devices": [
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "/dev/loop3"
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             ],
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_name": "ceph_lv0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_size": "21470642176",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "name": "ceph_lv0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "tags": {
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.cluster_name": "ceph",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.crush_device_class": "",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.encrypted": "0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.objectstore": "bluestore",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.osd_id": "0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.type": "block",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.vdo": "0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.with_tpm": "0"
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             },
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "type": "block",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "vg_name": "ceph_vg0"
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:         }
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:     ],
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:     "1": [
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:         {
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "devices": [
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "/dev/loop4"
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             ],
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_name": "ceph_lv1",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_size": "21470642176",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "name": "ceph_lv1",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "tags": {
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.cluster_name": "ceph",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.crush_device_class": "",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.encrypted": "0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.objectstore": "bluestore",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.osd_id": "1",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.type": "block",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.vdo": "0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.with_tpm": "0"
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             },
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "type": "block",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "vg_name": "ceph_vg1"
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:         }
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:     ],
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:     "2": [
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:         {
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "devices": [
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "/dev/loop5"
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             ],
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_name": "ceph_lv2",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_size": "21470642176",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "name": "ceph_lv2",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "tags": {
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.cluster_name": "ceph",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.crush_device_class": "",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.encrypted": "0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.objectstore": "bluestore",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.osd_id": "2",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.type": "block",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.vdo": "0",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:                 "ceph.with_tpm": "0"
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             },
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "type": "block",
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:             "vg_name": "ceph_vg2"
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:         }
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]:     ]
Jan 21 14:06:58 compute-0 competent_mendeleev[242600]: }
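The JSON document printed by competent_mendeleev is a per-host OSD inventory keyed by OSD id ("0", "1", "2"), each entry describing one bluestore logical volume: its backing device (/dev/loop3..5), LV path, 20 GiB size, and the ceph.* LV tags cephadm relies on. A sketch of pulling an osd-to-device map out of it (the literal below is a trimmed copy of the structure above; all three entries share the same shape):

    import json

    # Trimmed copy of the inventory above; "1" and "2" look the same.
    raw = """
    {
        "0": [
            {
                "devices": ["/dev/loop3"],
                "lv_path": "/dev/ceph_vg0/ceph_lv0",
                "lv_size": "21470642176",
                "tags": {
                    "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
                    "ceph.objectstore": "bluestore"
                },
                "vg_name": "ceph_vg0"
            }
        ]
    }
    """
    for osd_id, lvs in sorted(json.loads(raw).items()):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({', '.join(lv['devices'])}, {size_gib:.1f} GiB)")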
Jan 21 14:06:58 compute-0 systemd[1]: libpod-c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7.scope: Deactivated successfully.
Jan 21 14:06:58 compute-0 podman[242583]: 2026-01-21 14:06:58.334608377 +0000 UTC m=+0.533260234 container died c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:06:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8258b65e87196343693ece3622252a999f682a8a081df63ca5bf00dde223fd6-merged.mount: Deactivated successfully.
Jan 21 14:06:58 compute-0 podman[242583]: 2026-01-21 14:06:58.381934699 +0000 UTC m=+0.580586486 container remove c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 14:06:58 compute-0 systemd[1]: libpod-conmon-c0668234d99ef13d3bc3c140d023f5e4a1dc3788b5f0f75f1d2d5d32d3c19af7.scope: Deactivated successfully.
Jan 21 14:06:58 compute-0 sudo[242506]: pam_unix(sudo:session): session closed for user root
Jan 21 14:06:58 compute-0 sudo[242622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:06:58 compute-0 sudo[242622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:06:58 compute-0 sudo[242622]: pam_unix(sudo:session): session closed for user root
Jan 21 14:06:58 compute-0 sudo[242647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:06:58 compute-0 sudo[242647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:06:58 compute-0 podman[242684]: 2026-01-21 14:06:58.875063777 +0000 UTC m=+0.065363665 container create 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 14:06:58 compute-0 systemd[1]: Started libpod-conmon-458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8.scope.
Jan 21 14:06:58 compute-0 podman[242684]: 2026-01-21 14:06:58.848217658 +0000 UTC m=+0.038517626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:06:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:06:58 compute-0 podman[242684]: 2026-01-21 14:06:58.970432126 +0000 UTC m=+0.160732044 container init 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:06:58 compute-0 podman[242684]: 2026-01-21 14:06:58.982893252 +0000 UTC m=+0.173193130 container start 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 14:06:58 compute-0 amazing_mayer[242700]: 167 167
Jan 21 14:06:58 compute-0 systemd[1]: libpod-458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8.scope: Deactivated successfully.
Jan 21 14:06:58 compute-0 podman[242684]: 2026-01-21 14:06:58.987038904 +0000 UTC m=+0.177338892 container attach 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 14:06:58 compute-0 podman[242684]: 2026-01-21 14:06:58.987733612 +0000 UTC m=+0.178033490 container died 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 14:06:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cd080c3785eb7b9ceb7ac3b09a7c826d06d65a72f9cd61ff09a025e1926df19-merged.mount: Deactivated successfully.
Jan 21 14:06:59 compute-0 podman[242684]: 2026-01-21 14:06:59.033689578 +0000 UTC m=+0.223989456 container remove 458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mayer, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:06:59 compute-0 systemd[1]: libpod-conmon-458e772c3083573ecaebe33a550e95e5f2739224f65a710ae6ce283891b090f8.scope: Deactivated successfully.
Jan 21 14:06:59 compute-0 podman[242724]: 2026-01-21 14:06:59.232212359 +0000 UTC m=+0.071877614 container create e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 14:06:59 compute-0 systemd[1]: Started libpod-conmon-e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2.scope.
Jan 21 14:06:59 compute-0 podman[242724]: 2026-01-21 14:06:59.199787514 +0000 UTC m=+0.039452779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:06:59 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19a8dc2abd20c119fa4d04ba39c39898108eb8b0a584f6ba75e17bdf83d49bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19a8dc2abd20c119fa4d04ba39c39898108eb8b0a584f6ba75e17bdf83d49bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19a8dc2abd20c119fa4d04ba39c39898108eb8b0a584f6ba75e17bdf83d49bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19a8dc2abd20c119fa4d04ba39c39898108eb8b0a584f6ba75e17bdf83d49bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:06:59 compute-0 podman[242724]: 2026-01-21 14:06:59.328250846 +0000 UTC m=+0.167916081 container init e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:06:59 compute-0 podman[242724]: 2026-01-21 14:06:59.334688963 +0000 UTC m=+0.174354178 container start e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:06:59 compute-0 podman[242724]: 2026-01-21 14:06:59.338877586 +0000 UTC m=+0.178542811 container attach e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:06:59 compute-0 ceph-mon[75031]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:06:59 compute-0 lvm[242820]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:06:59 compute-0 lvm[242820]: VG ceph_vg1 finished
Jan 21 14:06:59 compute-0 lvm[242819]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:06:59 compute-0 lvm[242819]: VG ceph_vg0 finished
Jan 21 14:07:00 compute-0 lvm[242822]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:07:00 compute-0 lvm[242822]: VG ceph_vg2 finished
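The lvm messages above are udev-driven autoactivation: each pvscan instance marks its PV online and declares the owning VG complete once all of that VG's PVs are present (trivially true here, with one loop device per VG). A sketch for listing the same VG/LV/device triples from lvm's JSON reporting (assumes an lvm2 build with --reportformat json support):

    import json
    import subprocess

    # Report every LV with its VG and backing devices as JSON; lvm's
    # report structure is {"report": [{"lv": [...]}]}.
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,devices"],
        capture_output=True, text=True, check=True).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        print(lv["vg_name"], lv["lv_name"], lv["devices"])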
Jan 21 14:07:00 compute-0 magical_goodall[242741]: {}
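magical_goodall is the container backing the `ceph-volume ... raw list --format json` call from the audited sudo line at 14:06:58, and its `{}` output says there are no raw-mode (non-LVM) OSDs on this host, consistent with the LVM-based inventory printed just before. A sketch of issuing the same query and checking the result (the cephadm script path, hash, and fsid are copied from the log and are host-specific):

    import json
    import subprocess

    # Same invocation as the audited sudo command above: cephadm runs
    # ceph-volume inside a one-shot container and prints JSON on stdout.
    cmd = [
        "sudo", "/bin/python3",
        "/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/"
        "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b",
        "--image",
        "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86",
        "--timeout", "895",
        "ceph-volume", "--fsid", "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
        "--", "raw", "list", "--format", "json",
    ]
    raw_osds = json.loads(
        subprocess.run(cmd, capture_output=True, text=True,
                       check=True).stdout)
    print(raw_osds or "no raw-mode OSDs")   # {} here, as logged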
Jan 21 14:07:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:00 compute-0 podman[242724]: 2026-01-21 14:07:00.146169522 +0000 UTC m=+0.985834747 container died e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 14:07:00 compute-0 systemd[1]: libpod-e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2.scope: Deactivated successfully.
Jan 21 14:07:00 compute-0 systemd[1]: libpod-e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2.scope: Consumed 1.317s CPU time.
Jan 21 14:07:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b19a8dc2abd20c119fa4d04ba39c39898108eb8b0a584f6ba75e17bdf83d49bc-merged.mount: Deactivated successfully.
Jan 21 14:07:00 compute-0 podman[242724]: 2026-01-21 14:07:00.348428244 +0000 UTC m=+1.188093469 container remove e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 14:07:00 compute-0 systemd[1]: libpod-conmon-e19111413c4377793f32a5692e32719369a9e25a09d7bfd960ed5603d86617a2.scope: Deactivated successfully.
Jan 21 14:07:00 compute-0 sudo[242647]: pam_unix(sudo:session): session closed for user root
Jan 21 14:07:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:07:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:07:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:07:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:07:00 compute-0 sudo[242837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:07:00 compute-0 sudo[242837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:07:00 compute-0 sudo[242837]: pam_unix(sudo:session): session closed for user root
Jan 21 14:07:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:01 compute-0 ceph-mon[75031]: pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:07:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:07:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:03 compute-0 ceph-mon[75031]: pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:05 compute-0 ceph-mon[75031]: pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:07 compute-0 ceph-mon[75031]: pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:09 compute-0 ceph-mon[75031]: pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:07:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:07:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:07:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:07:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:07:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:07:11 compute-0 ceph-mon[75031]: pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:13 compute-0 ceph-mon[75031]: pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:15 compute-0 ceph-mon[75031]: pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:17 compute-0 ceph-mon[75031]: pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:19 compute-0 ceph-mon[75031]: pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:21 compute-0 ceph-mon[75031]: pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:22 compute-0 ceph-mon[75031]: pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:07:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3079051532' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:07:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:07:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3079051532' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:07:23 compute-0 podman[242863]: 2026-01-21 14:07:23.344770218 +0000 UTC m=+0.069149358 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 14:07:23 compute-0 podman[242862]: 2026-01-21 14:07:23.374645184 +0000 UTC m=+0.098098551 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
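The two health_status lines above are podman's scheduled healthchecks for the EDPM-managed OVN containers; the 'healthcheck' entry in each config_data mounts /var/lib/openstack/healthchecks/<name> into the container and runs /openstack/healthcheck as the test. A sketch for reading the same state on demand (assuming the podman CLI; recent podman exposes it under State.Health while older releases used State.Healthcheck, so both keys are tried):

    import json
    import subprocess

    # Inspect a container and print its health status and failing streak,
    # matching the health_status / health_failing_streak fields logged above.
    out = subprocess.run(["podman", "inspect", "ovn_controller"],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))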
Jan 21 14:07:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/3079051532' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:07:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/3079051532' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:07:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:24 compute-0 ceph-mon[75031]: pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:27 compute-0 ceph-mon[75031]: pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:29 compute-0 ceph-mon[75031]: pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:31 compute-0 ceph-mon[75031]: pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:33 compute-0 ceph-mon[75031]: pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:07:33.897 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:07:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:07:33.898 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:07:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:07:33.898 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
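The acquiring/acquired/released trio with waited/held timings above is the standard DEBUG signature oslo.concurrency emits around any code guarded by lockutils; neutron's ProcessMonitor wraps its child-process check this way. A minimal sketch of the pattern (requires oslo.concurrency; the body is elided, and the DEBUG lines appear only when debug logging is configured):

    from oslo_concurrency import lockutils

    # An in-process lock named like the one in the log. Entering the
    # function logs 'Acquiring lock "_check_child_processes"', then
    # 'Lock ... acquired ... waited', and on exit 'Lock ... "released"
    # ... held'.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # neutron respawns dead haproxy children here

    _check_child_processes()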
Jan 21 14:07:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:35 compute-0 ceph-mon[75031]: pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:35 compute-0 nova_compute[239261]: 2026-01-21 14:07:35.719 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:07:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:37 compute-0 ceph-mon[75031]: pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.749 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.749 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.750 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.750 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.775 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.776 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.776 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.777 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:07:37 compute-0 nova_compute[239261]: 2026-01-21 14:07:37.777 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:07:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:07:38 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1761831223' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:07:38 compute-0 nova_compute[239261]: 2026-01-21 14:07:38.336 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
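This is nova's resource audit sizing the hypervisor from Ceph rather than from a local disk: the libvirt driver shells out to `ceph df --format=json` with the client.openstack key (0.558 s here) and derives the free_disk value that appears in the resource view a few lines below. A sketch of the same call and the fields involved (field names per recent Ceph releases; an illustration, not nova's exact parsing code):

    import json
    import subprocess

    # Same command the log shows oslo.processutils running.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(out)
    print(df["stats"]["total_avail_bytes"] / 2**30, "GiB raw avail")
    for pool in df["pools"]:
        # max_avail is what a client could still write to this pool.
        print(pool["name"], pool["stats"]["max_avail"] / 2**30, "GiB")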
Jan 21 14:07:38 compute-0 nova_compute[239261]: 2026-01-21 14:07:38.517 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:07:38 compute-0 nova_compute[239261]: 2026-01-21 14:07:38.519 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:07:38 compute-0 nova_compute[239261]: 2026-01-21 14:07:38.519 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:07:38 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1761831223' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:07:38 compute-0 nova_compute[239261]: 2026-01-21 14:07:38.520 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:07:38 compute-0 nova_compute[239261]: 2026-01-21 14:07:38.616 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:07:38 compute-0 nova_compute[239261]: 2026-01-21 14:07:38.617 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:07:38 compute-0 nova_compute[239261]: 2026-01-21 14:07:38.635 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:07:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:07:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282412038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:07:39 compute-0 nova_compute[239261]: 2026-01-21 14:07:39.187 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:07:39 compute-0 nova_compute[239261]: 2026-01-21 14:07:39.193 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:07:39 compute-0 nova_compute[239261]: 2026-01-21 14:07:39.240 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
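[editor's note] The inventory dict in the line above is what Placement uses to size this provider: for each resource class the usable capacity is (total - reserved) * allocation_ratio, which is how 8 physical vCPUs with a 4.0 ratio can back 32 vCPUs of allocations. A worked check, with the numbers copied from the log line above:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap}")
    # MEMORY_MB: 7167.0, VCPU: 32.0, DISK_GB: 53.1

Note the DISK_GB ratio of 0.9, which undercommits the Ceph-backed disk rather than overcommitting it.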
Jan 21 14:07:39 compute-0 nova_compute[239261]: 2026-01-21 14:07:39.243 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:07:39 compute-0 nova_compute[239261]: 2026-01-21 14:07:39.243 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:07:39 compute-0 ceph-mon[75031]: pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:39 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/282412038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:07:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:07:39
Jan 21 14:07:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:07:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:07:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Jan 21 14:07:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:07:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:40 compute-0 nova_compute[239261]: 2026-01-21 14:07:40.218 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:07:40 compute-0 nova_compute[239261]: 2026-01-21 14:07:40.219 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:07:40 compute-0 nova_compute[239261]: 2026-01-21 14:07:40.219 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:07:40 compute-0 nova_compute[239261]: 2026-01-21 14:07:40.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:07:40 compute-0 nova_compute[239261]: 2026-01-21 14:07:40.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:07:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:07:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:07:41 compute-0 ceph-mon[75031]: pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:43 compute-0 ceph-mon[75031]: pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:45 compute-0 ceph-mon[75031]: pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:47 compute-0 ceph-mon[75031]: pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:49 compute-0 ceph-mon[75031]: pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:07:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
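[editor's note] The pg_autoscaler pairs above are internally consistent: each pool's "pg target" equals its share of raw space times its bias times a PG budget of 300, which matches mon_target_pg_per_osd at its default of 100 multiplied by this cluster's 3 OSDs (both values are assumptions inferred from the arithmetic, not logged directly). The "quantized" figure then behaves like the target rounded up to a power of two and floored at the pool's pg_num_min (32 by default, with smaller minimums for .mgr and the CephFS metadata pool, judging by the output). A sketch that reproduces the logged numbers:

    # Reproduce the pg_autoscaler lines above. PG_BUDGET = 300 is an
    # assumption: mon_target_pg_per_osd (default 100) * 3 OSDs.
    PG_BUDGET = 100 * 3

    def pg_target(usage_ratio, bias, pg_num_min):
        raw = usage_ratio * bias * PG_BUDGET
        ideal = 1
        while ideal < raw:          # round up to a power of two
            ideal *= 2
        return raw, max(ideal, pg_num_min)

    print(pg_target(7.185749983720779e-06, 1.0, 1))   # .mgr -> (0.0021557..., 1)
    print(pg_target(1.2753072983198444e-06, 4.0, 16)) # meta -> (0.0015303..., 16)
    print(pg_target(0.0, 1.0, 32))                    # vms  -> (0.0, 32)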
Jan 21 14:07:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:51 compute-0 ceph-mon[75031]: pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:53 compute-0 ceph-mon[75031]: pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:54 compute-0 podman[242953]: 2026-01-21 14:07:54.379432145 +0000 UTC m=+0.097424494 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 14:07:54 compute-0 podman[242952]: 2026-01-21 14:07:54.389239079 +0000 UTC m=+0.105429333 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 14:07:55 compute-0 ceph-mon[75031]: pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:07:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:57 compute-0 ceph-mon[75031]: pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:07:58 compute-0 ceph-mon[75031]: pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:00 compute-0 sudo[242999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:08:00 compute-0 sudo[242999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:08:00 compute-0 sudo[242999]: pam_unix(sudo:session): session closed for user root
Jan 21 14:08:00 compute-0 sudo[243024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:08:00 compute-0 sudo[243024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:08:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:01 compute-0 ceph-mon[75031]: pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:01 compute-0 sudo[243024]: pam_unix(sudo:session): session closed for user root
Jan 21 14:08:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:08:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:08:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:08:01 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:08:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:08:01 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:08:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:08:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:08:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:08:01 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:08:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:08:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:08:01 compute-0 sudo[243080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:08:01 compute-0 sudo[243080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:08:01 compute-0 sudo[243080]: pam_unix(sudo:session): session closed for user root
Jan 21 14:08:01 compute-0 sudo[243105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:08:01 compute-0 sudo[243105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:08:01 compute-0 podman[243142]: 2026-01-21 14:08:01.944378818 +0000 UTC m=+0.071067436 container create 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 14:08:01 compute-0 systemd[1]: Started libpod-conmon-1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9.scope.
Jan 21 14:08:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:08:02 compute-0 podman[243142]: 2026-01-21 14:08:01.919803105 +0000 UTC m=+0.046491743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:08:02 compute-0 podman[243142]: 2026-01-21 14:08:02.0333759 +0000 UTC m=+0.160064518 container init 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:08:02 compute-0 podman[243142]: 2026-01-21 14:08:02.042235161 +0000 UTC m=+0.168923759 container start 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:08:02 compute-0 podman[243142]: 2026-01-21 14:08:02.045807211 +0000 UTC m=+0.172495789 container attach 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 14:08:02 compute-0 strange_blackwell[243158]: 167 167
Jan 21 14:08:02 compute-0 systemd[1]: libpod-1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9.scope: Deactivated successfully.
Jan 21 14:08:02 compute-0 podman[243142]: 2026-01-21 14:08:02.049373019 +0000 UTC m=+0.176061607 container died 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 14:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-76fd31376504dff5d799e2649246d3948fa5cfa3028ae644c4734b629d82eea8-merged.mount: Deactivated successfully.
Jan 21 14:08:02 compute-0 podman[243142]: 2026-01-21 14:08:02.097802359 +0000 UTC m=+0.224490937 container remove 1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_blackwell, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 21 14:08:02 compute-0 systemd[1]: libpod-conmon-1b1d2b79d3a6db927349266e802e93933a2c62da47751cc1b786a6e9f36eadd9.scope: Deactivated successfully.
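[editor's note] Each of these short-lived ceph containers follows the same journal pattern: podman logs create, init, start, and attach; the entrypoint prints its output under the random container name (here "167 167", plausibly the ceph uid/gid pair — an interpretation, not logged as such); then died and remove follow as the systemd scope deactivates. A hedged sketch of the equivalent one-shot invocation, assuming podman on PATH and using the image digest from the log:

    import subprocess

    # One-shot helper container like cephadm's above: `podman run --rm`
    # produces the create/init/start/attach/died/remove event sequence
    # seen in this journal.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    subprocess.run(["podman", "run", "--rm", IMAGE, "ceph", "--version"],
                   check=True)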
Jan 21 14:08:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:02 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:08:02 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:08:02 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:08:02 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:08:02 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:08:02 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:08:02 compute-0 podman[243179]: 2026-01-21 14:08:02.346273923 +0000 UTC m=+0.070668115 container create 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:08:02 compute-0 systemd[1]: Started libpod-conmon-45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9.scope.
Jan 21 14:08:02 compute-0 podman[243179]: 2026-01-21 14:08:02.314402457 +0000 UTC m=+0.038796779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:08:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:02 compute-0 podman[243179]: 2026-01-21 14:08:02.458230908 +0000 UTC m=+0.182625180 container init 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 14:08:02 compute-0 podman[243179]: 2026-01-21 14:08:02.468502705 +0000 UTC m=+0.192896897 container start 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 14:08:02 compute-0 podman[243179]: 2026-01-21 14:08:02.474138695 +0000 UTC m=+0.198532887 container attach 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:08:02 compute-0 heuristic_williamson[243196]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:08:02 compute-0 heuristic_williamson[243196]: --> All data devices are unavailable
Jan 21 14:08:02 compute-0 systemd[1]: libpod-45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9.scope: Deactivated successfully.
Jan 21 14:08:02 compute-0 podman[243179]: 2026-01-21 14:08:02.94128533 +0000 UTC m=+0.665679522 container died 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 21 14:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f36c8296306b69b4566fb03ad1fa2ba5ddee720b52e83094bd8bb7816c02123e-merged.mount: Deactivated successfully.
Jan 21 14:08:02 compute-0 podman[243179]: 2026-01-21 14:08:02.984310645 +0000 UTC m=+0.708704837 container remove 45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:08:02 compute-0 systemd[1]: libpod-conmon-45493026c58f6f29301e95bf28b92da6529ddb7e53de5fa192cae1fc6aa71cb9.scope: Deactivated successfully.
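[editor's note] Here "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" means the batch run had nothing left to create: the `lvm list` output just below shows each LV already tagged with a ceph.osd_id, i.e. the OSDs were prepared on an earlier pass, so ceph-volume declines to reuse them. One plausible way to confirm that by hand, assuming LVM's JSON reporting (`lvs --reportformat json`, available in current LVM2):

    import json
    import subprocess

    def already_prepared_osds():
        # An LV whose tags include ceph.osd_id= is already a prepared OSD,
        # which is why `ceph-volume lvm batch` reports it as unavailable.
        out = subprocess.run(
            ["lvs", "--reportformat", "json", "-o", "lv_path,lv_tags"],
            check=True, capture_output=True, text=True,
        ).stdout
        for lv in json.loads(out)["report"][0]["lv"]:
            if "ceph.osd_id=" in lv["lv_tags"]:
                yield lv["lv_path"]

    print(list(already_prepared_osds()))
    # expected here: /dev/ceph_vg0/ceph_lv0, /dev/ceph_vg1/ceph_lv1,
    #                /dev/ceph_vg2/ceph_lv2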
Jan 21 14:08:03 compute-0 sudo[243105]: pam_unix(sudo:session): session closed for user root
Jan 21 14:08:03 compute-0 sudo[243226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:08:03 compute-0 sudo[243226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:08:03 compute-0 sudo[243226]: pam_unix(sudo:session): session closed for user root
Jan 21 14:08:03 compute-0 sudo[243251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:08:03 compute-0 sudo[243251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:08:03 compute-0 ceph-mon[75031]: pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:03 compute-0 podman[243288]: 2026-01-21 14:08:03.532468861 +0000 UTC m=+0.071027284 container create b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 14:08:03 compute-0 systemd[1]: Started libpod-conmon-b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5.scope.
Jan 21 14:08:03 compute-0 podman[243288]: 2026-01-21 14:08:03.503520449 +0000 UTC m=+0.042078922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:08:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:08:03 compute-0 podman[243288]: 2026-01-21 14:08:03.637130685 +0000 UTC m=+0.175689108 container init b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:08:03 compute-0 podman[243288]: 2026-01-21 14:08:03.644338025 +0000 UTC m=+0.182896408 container start b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 21 14:08:03 compute-0 busy_stonebraker[243304]: 167 167
Jan 21 14:08:03 compute-0 systemd[1]: libpod-b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5.scope: Deactivated successfully.
Jan 21 14:08:03 compute-0 podman[243288]: 2026-01-21 14:08:03.648809846 +0000 UTC m=+0.187368339 container attach b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 14:08:03 compute-0 podman[243288]: 2026-01-21 14:08:03.650301673 +0000 UTC m=+0.188860096 container died b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 14:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-490c098f01333b9c875593af7ff61ce2b44c1531fb61ea554d5f29ebf5e0a540-merged.mount: Deactivated successfully.
Jan 21 14:08:03 compute-0 podman[243288]: 2026-01-21 14:08:03.715041781 +0000 UTC m=+0.253600204 container remove b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 14:08:03 compute-0 systemd[1]: libpod-conmon-b381ff7507fff826779d238562def1df00bdae5777d5f616f5eb55d625da8ea5.scope: Deactivated successfully.
Jan 21 14:08:03 compute-0 podman[243330]: 2026-01-21 14:08:03.949769571 +0000 UTC m=+0.075073285 container create 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 14:08:03 compute-0 systemd[1]: Started libpod-conmon-63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd.scope.
Jan 21 14:08:04 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:08:04 compute-0 podman[243330]: 2026-01-21 14:08:03.919413314 +0000 UTC m=+0.044717118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1bac90e5328fe5c1b21c3fd87e6596d40ebe4656370c39b564cfb4ac00976d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1bac90e5328fe5c1b21c3fd87e6596d40ebe4656370c39b564cfb4ac00976d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1bac90e5328fe5c1b21c3fd87e6596d40ebe4656370c39b564cfb4ac00976d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b1bac90e5328fe5c1b21c3fd87e6596d40ebe4656370c39b564cfb4ac00976d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:04 compute-0 podman[243330]: 2026-01-21 14:08:04.039115982 +0000 UTC m=+0.164419726 container init 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:08:04 compute-0 podman[243330]: 2026-01-21 14:08:04.04581675 +0000 UTC m=+0.171120464 container start 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 14:08:04 compute-0 podman[243330]: 2026-01-21 14:08:04.049196304 +0000 UTC m=+0.174500048 container attach 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:08:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:04 compute-0 great_shaw[243346]: {
Jan 21 14:08:04 compute-0 great_shaw[243346]:     "0": [
Jan 21 14:08:04 compute-0 great_shaw[243346]:         {
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "devices": [
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "/dev/loop3"
Jan 21 14:08:04 compute-0 great_shaw[243346]:             ],
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_name": "ceph_lv0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_size": "21470642176",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "name": "ceph_lv0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "tags": {
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.cluster_name": "ceph",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.crush_device_class": "",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.encrypted": "0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.objectstore": "bluestore",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.osd_id": "0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.type": "block",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.vdo": "0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.with_tpm": "0"
Jan 21 14:08:04 compute-0 great_shaw[243346]:             },
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "type": "block",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "vg_name": "ceph_vg0"
Jan 21 14:08:04 compute-0 great_shaw[243346]:         }
Jan 21 14:08:04 compute-0 great_shaw[243346]:     ],
Jan 21 14:08:04 compute-0 great_shaw[243346]:     "1": [
Jan 21 14:08:04 compute-0 great_shaw[243346]:         {
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "devices": [
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "/dev/loop4"
Jan 21 14:08:04 compute-0 great_shaw[243346]:             ],
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_name": "ceph_lv1",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_size": "21470642176",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "name": "ceph_lv1",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "tags": {
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.cluster_name": "ceph",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.crush_device_class": "",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.encrypted": "0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.objectstore": "bluestore",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.osd_id": "1",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.type": "block",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.vdo": "0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.with_tpm": "0"
Jan 21 14:08:04 compute-0 great_shaw[243346]:             },
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "type": "block",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "vg_name": "ceph_vg1"
Jan 21 14:08:04 compute-0 great_shaw[243346]:         }
Jan 21 14:08:04 compute-0 great_shaw[243346]:     ],
Jan 21 14:08:04 compute-0 great_shaw[243346]:     "2": [
Jan 21 14:08:04 compute-0 great_shaw[243346]:         {
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "devices": [
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "/dev/loop5"
Jan 21 14:08:04 compute-0 great_shaw[243346]:             ],
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_name": "ceph_lv2",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_size": "21470642176",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "name": "ceph_lv2",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "tags": {
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.cluster_name": "ceph",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.crush_device_class": "",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.encrypted": "0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.objectstore": "bluestore",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.osd_id": "2",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.type": "block",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.vdo": "0",
Jan 21 14:08:04 compute-0 great_shaw[243346]:                 "ceph.with_tpm": "0"
Jan 21 14:08:04 compute-0 great_shaw[243346]:             },
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "type": "block",
Jan 21 14:08:04 compute-0 great_shaw[243346]:             "vg_name": "ceph_vg2"
Jan 21 14:08:04 compute-0 great_shaw[243346]:         }
Jan 21 14:08:04 compute-0 great_shaw[243346]:     ]
Jan 21 14:08:04 compute-0 great_shaw[243346]: }
Jan 21 14:08:04 compute-0 systemd[1]: libpod-63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd.scope: Deactivated successfully.
Jan 21 14:08:04 compute-0 podman[243355]: 2026-01-21 14:08:04.414452834 +0000 UTC m=+0.026693587 container died 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 14:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b1bac90e5328fe5c1b21c3fd87e6596d40ebe4656370c39b564cfb4ac00976d-merged.mount: Deactivated successfully.
Jan 21 14:08:04 compute-0 podman[243355]: 2026-01-21 14:08:04.474268708 +0000 UTC m=+0.086509431 container remove 63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 21 14:08:04 compute-0 systemd[1]: libpod-conmon-63014992e5475795bb6851f2d87742b3d85dd3dcb344a233cded24d85ce800dd.scope: Deactivated successfully.
Jan 21 14:08:04 compute-0 sudo[243251]: pam_unix(sudo:session): session closed for user root
Jan 21 14:08:04 compute-0 sudo[243370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:08:04 compute-0 sudo[243370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:08:04 compute-0 sudo[243370]: pam_unix(sudo:session): session closed for user root
Jan 21 14:08:04 compute-0 sudo[243395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:08:04 compute-0 sudo[243395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:08:05 compute-0 podman[243431]: 2026-01-21 14:08:05.063155782 +0000 UTC m=+0.067432485 container create cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 14:08:05 compute-0 systemd[1]: Started libpod-conmon-cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5.scope.
Jan 21 14:08:05 compute-0 podman[243431]: 2026-01-21 14:08:05.036838095 +0000 UTC m=+0.041114818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:08:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:08:05 compute-0 podman[243431]: 2026-01-21 14:08:05.172716958 +0000 UTC m=+0.176993681 container init cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:08:05 compute-0 podman[243431]: 2026-01-21 14:08:05.187609449 +0000 UTC m=+0.191886142 container start cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 14:08:05 compute-0 podman[243431]: 2026-01-21 14:08:05.192512532 +0000 UTC m=+0.196789245 container attach cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 14:08:05 compute-0 upbeat_golick[243448]: 167 167
Jan 21 14:08:05 compute-0 systemd[1]: libpod-cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5.scope: Deactivated successfully.
Jan 21 14:08:05 compute-0 podman[243431]: 2026-01-21 14:08:05.194768859 +0000 UTC m=+0.199045572 container died cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-aba33f38f9567959acb77ac2c498a2a7f22dec7764ecacd933fa01fb889e05cf-merged.mount: Deactivated successfully.
Jan 21 14:08:05 compute-0 podman[243431]: 2026-01-21 14:08:05.255147496 +0000 UTC m=+0.259424179 container remove cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:08:05 compute-0 systemd[1]: libpod-conmon-cd0930d4ebf3e4b8544a67e0e26a4b45113c4a4f3b3698bd560eb0347018b9a5.scope: Deactivated successfully.
Jan 21 14:08:05 compute-0 podman[243471]: 2026-01-21 14:08:05.443891868 +0000 UTC m=+0.055109087 container create e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:08:05 compute-0 systemd[1]: Started libpod-conmon-e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392.scope.
Jan 21 14:08:05 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7474309ca22c6403b57d36ad8c337c3bce06d88ff1491c5cf77f10d54901e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7474309ca22c6403b57d36ad8c337c3bce06d88ff1491c5cf77f10d54901e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7474309ca22c6403b57d36ad8c337c3bce06d88ff1491c5cf77f10d54901e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7474309ca22c6403b57d36ad8c337c3bce06d88ff1491c5cf77f10d54901e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:08:05 compute-0 podman[243471]: 2026-01-21 14:08:05.426978476 +0000 UTC m=+0.038195675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:08:05 compute-0 podman[243471]: 2026-01-21 14:08:05.52524266 +0000 UTC m=+0.136459889 container init e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 14:08:05 compute-0 podman[243471]: 2026-01-21 14:08:05.530568292 +0000 UTC m=+0.141785501 container start e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 21 14:08:05 compute-0 ceph-mon[75031]: pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:05 compute-0 podman[243471]: 2026-01-21 14:08:05.535820254 +0000 UTC m=+0.147037473 container attach e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 14:08:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:06 compute-0 lvm[243569]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:08:06 compute-0 lvm[243569]: VG ceph_vg2 finished
Jan 21 14:08:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:06 compute-0 lvm[243567]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:08:06 compute-0 lvm[243567]: VG ceph_vg1 finished
Jan 21 14:08:06 compute-0 lvm[243566]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:08:06 compute-0 lvm[243566]: VG ceph_vg0 finished
Jan 21 14:08:06 compute-0 musing_lichterman[243487]: {}
Jan 21 14:08:06 compute-0 systemd[1]: libpod-e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392.scope: Deactivated successfully.
Jan 21 14:08:06 compute-0 systemd[1]: libpod-e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392.scope: Consumed 1.213s CPU time.
Jan 21 14:08:06 compute-0 podman[243471]: 2026-01-21 14:08:06.291135574 +0000 UTC m=+0.902352773 container died e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Jan 21 14:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-62d7474309ca22c6403b57d36ad8c337c3bce06d88ff1491c5cf77f10d54901e-merged.mount: Deactivated successfully.
Jan 21 14:08:06 compute-0 podman[243471]: 2026-01-21 14:08:06.334767224 +0000 UTC m=+0.945984433 container remove e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lichterman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:08:06 compute-0 systemd[1]: libpod-conmon-e8303912940c6aeb151e69b87335b7fbe5b94cd54e26b64a2fe30a21d1fdb392.scope: Deactivated successfully.
Jan 21 14:08:06 compute-0 sudo[243395]: pam_unix(sudo:session): session closed for user root
Jan 21 14:08:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:08:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:08:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:08:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:08:06 compute-0 sudo[243583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:08:06 compute-0 sudo[243583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:08:06 compute-0 sudo[243583]: pam_unix(sudo:session): session closed for user root
Jan 21 14:08:07 compute-0 ceph-mon[75031]: pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:08:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:08:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:09 compute-0 ceph-mon[75031]: pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:08:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:08:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:08:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:08:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:08:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:08:11 compute-0 ceph-mon[75031]: pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:13 compute-0 ceph-mon[75031]: pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:15 compute-0 ceph-mon[75031]: pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:17 compute-0 ceph-mon[75031]: pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:19 compute-0 ceph-mon[75031]: pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:21 compute-0 ceph-mon[75031]: pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:08:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4164572454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:08:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:08:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4164572454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:08:23 compute-0 ceph-mon[75031]: pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/4164572454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:08:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/4164572454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:08:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:25 compute-0 podman[243609]: 2026-01-21 14:08:25.367083977 +0000 UTC m=+0.086466251 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 21 14:08:25 compute-0 podman[243608]: 2026-01-21 14:08:25.367981329 +0000 UTC m=+0.087371833 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 21 14:08:25 compute-0 ceph-mon[75031]: pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:27 compute-0 ceph-mon[75031]: pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:29 compute-0 ceph-mon[75031]: pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:31 compute-0 ceph-mon[75031]: pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:33 compute-0 ceph-mon[75031]: pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:08:33.898 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:08:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:08:33.898 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:08:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:08:33.899 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:08:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:35 compute-0 ceph-mon[75031]: pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:35 compute-0 nova_compute[239261]: 2026-01-21 14:08:35.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:35 compute-0 nova_compute[239261]: 2026-01-21 14:08:35.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 21 14:08:35 compute-0 nova_compute[239261]: 2026-01-21 14:08:35.745 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 21 14:08:35 compute-0 nova_compute[239261]: 2026-01-21 14:08:35.746 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:35 compute-0 nova_compute[239261]: 2026-01-21 14:08:35.747 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 21 14:08:35 compute-0 nova_compute[239261]: 2026-01-21 14:08:35.775 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:37 compute-0 ceph-mon[75031]: pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:38 compute-0 nova_compute[239261]: 2026-01-21 14:08:38.793 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:38 compute-0 nova_compute[239261]: 2026-01-21 14:08:38.794 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:08:38 compute-0 nova_compute[239261]: 2026-01-21 14:08:38.794 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:08:38 compute-0 nova_compute[239261]: 2026-01-21 14:08:38.996 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:08:38 compute-0 nova_compute[239261]: 2026-01-21 14:08:38.996 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:38 compute-0 nova_compute[239261]: 2026-01-21 14:08:38.997 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:08:39 compute-0 ceph-mon[75031]: pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:08:39
Jan 21 14:08:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:08:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:08:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 21 14:08:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:08:39 compute-0 nova_compute[239261]: 2026-01-21 14:08:39.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:39 compute-0 nova_compute[239261]: 2026-01-21 14:08:39.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:39 compute-0 nova_compute[239261]: 2026-01-21 14:08:39.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:40 compute-0 nova_compute[239261]: 2026-01-21 14:08:40.182 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:08:40 compute-0 nova_compute[239261]: 2026-01-21 14:08:40.184 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:08:40 compute-0 nova_compute[239261]: 2026-01-21 14:08:40.184 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:08:40 compute-0 nova_compute[239261]: 2026-01-21 14:08:40.184 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:08:40 compute-0 nova_compute[239261]: 2026-01-21 14:08:40.185 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:08:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:08:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867713952' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:08:40 compute-0 nova_compute[239261]: 2026-01-21 14:08:40.757 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:08:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:41 compute-0 nova_compute[239261]: 2026-01-21 14:08:41.002 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:08:41 compute-0 nova_compute[239261]: 2026-01-21 14:08:41.003 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:08:41 compute-0 nova_compute[239261]: 2026-01-21 14:08:41.004 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:08:41 compute-0 nova_compute[239261]: 2026-01-21 14:08:41.004 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:08:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:08:41 compute-0 ceph-mon[75031]: pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2867713952' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:08:41 compute-0 nova_compute[239261]: 2026-01-21 14:08:41.621 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:08:41 compute-0 nova_compute[239261]: 2026-01-21 14:08:41.622 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:08:41 compute-0 nova_compute[239261]: 2026-01-21 14:08:41.754 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing inventories for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 21 14:08:42 compute-0 nova_compute[239261]: 2026-01-21 14:08:42.040 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating ProviderTree inventory for provider 172aa181-ce4f-4953-808e-b8a26e60249f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 21 14:08:42 compute-0 nova_compute[239261]: 2026-01-21 14:08:42.040 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating inventory in ProviderTree for provider 172aa181-ce4f-4953-808e-b8a26e60249f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 21 14:08:42 compute-0 nova_compute[239261]: 2026-01-21 14:08:42.057 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing aggregate associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 21 14:08:42 compute-0 nova_compute[239261]: 2026-01-21 14:08:42.083 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing trait associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 21 14:08:42 compute-0 nova_compute[239261]: 2026-01-21 14:08:42.097 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:08:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:08:42 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1151325852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:08:42 compute-0 nova_compute[239261]: 2026-01-21 14:08:42.683 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:08:42 compute-0 nova_compute[239261]: 2026-01-21 14:08:42.691 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:08:43 compute-0 nova_compute[239261]: 2026-01-21 14:08:43.079 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:08:43 compute-0 nova_compute[239261]: 2026-01-21 14:08:43.082 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:08:43 compute-0 nova_compute[239261]: 2026-01-21 14:08:43.083 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:08:43 compute-0 ceph-mon[75031]: pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:43 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1151325852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:08:44 compute-0 nova_compute[239261]: 2026-01-21 14:08:44.083 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:44 compute-0 nova_compute[239261]: 2026-01-21 14:08:44.083 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:44 compute-0 nova_compute[239261]: 2026-01-21 14:08:44.084 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:44 compute-0 nova_compute[239261]: 2026-01-21 14:08:44.084 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:08:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:45 compute-0 ceph-mon[75031]: pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:47 compute-0 ceph-mon[75031]: pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:49 compute-0 ceph-mon[75031]: pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2753072983198444e-06 of space, bias 4.0, pg target 0.0015303687579838134 quantized to 16 (current 16)
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:08:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:08:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:51 compute-0 ceph-mon[75031]: pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:53 compute-0 ceph-mon[75031]: pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:55 compute-0 ceph-mon[75031]: pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:08:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:56 compute-0 podman[243697]: 2026-01-21 14:08:56.358391757 +0000 UTC m=+0.074789718 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 14:08:56 compute-0 podman[243696]: 2026-01-21 14:08:56.412703064 +0000 UTC m=+0.127611098 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 14:08:57 compute-0 ceph-mon[75031]: pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:08:58 compute-0 ceph-mon[75031]: pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:01 compute-0 ceph-mon[75031]: pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:03 compute-0 ceph-mon[75031]: pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:05 compute-0 ceph-mon[75031]: pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:06 compute-0 sudo[243740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:09:06 compute-0 sudo[243740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:09:06 compute-0 sudo[243740]: pam_unix(sudo:session): session closed for user root
Jan 21 14:09:06 compute-0 sudo[243765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:09:06 compute-0 sudo[243765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:09:07 compute-0 ceph-mon[75031]: pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:07 compute-0 sudo[243765]: pam_unix(sudo:session): session closed for user root
Jan 21 14:09:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:09:07 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:09:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:09:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:09:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:09:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:09:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:09:07 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:09:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:09:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:09:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:09:07 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:09:07 compute-0 sudo[243821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:09:07 compute-0 sudo[243821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:09:07 compute-0 sudo[243821]: pam_unix(sudo:session): session closed for user root
Jan 21 14:09:07 compute-0 sudo[243846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:09:07 compute-0 sudo[243846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:09:07 compute-0 podman[243883]: 2026-01-21 14:09:07.831331786 +0000 UTC m=+0.054928362 container create a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 14:09:07 compute-0 systemd[1]: Started libpod-conmon-a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a.scope.
Jan 21 14:09:07 compute-0 podman[243883]: 2026-01-21 14:09:07.807068751 +0000 UTC m=+0.030665307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:09:07 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:09:07 compute-0 podman[243883]: 2026-01-21 14:09:07.933526768 +0000 UTC m=+0.157123334 container init a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:09:07 compute-0 podman[243883]: 2026-01-21 14:09:07.944189084 +0000 UTC m=+0.167785670 container start a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 14:09:07 compute-0 podman[243883]: 2026-01-21 14:09:07.950293997 +0000 UTC m=+0.173890583 container attach a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 14:09:07 compute-0 naughty_boyd[243899]: 167 167
Jan 21 14:09:07 compute-0 systemd[1]: libpod-a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a.scope: Deactivated successfully.
Jan 21 14:09:07 compute-0 podman[243883]: 2026-01-21 14:09:07.953379664 +0000 UTC m=+0.176976210 container died a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 14:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-30fc648982a3f816e04d251bdf2a3d06111deabe1de657f6607af7fb3212e86b-merged.mount: Deactivated successfully.
Jan 21 14:09:07 compute-0 podman[243883]: 2026-01-21 14:09:07.995108486 +0000 UTC m=+0.218705032 container remove a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_boyd, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:09:08 compute-0 systemd[1]: libpod-conmon-a982eb43072c7ce05e1a39cdb08cd0c9846bfb0284012fc8504e6823219ca45a.scope: Deactivated successfully.
Jan 21 14:09:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:08 compute-0 podman[243923]: 2026-01-21 14:09:08.239935509 +0000 UTC m=+0.076247825 container create cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 14:09:08 compute-0 systemd[1]: Started libpod-conmon-cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae.scope.
Jan 21 14:09:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:09:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:09:08 compute-0 podman[243923]: 2026-01-21 14:09:08.211359346 +0000 UTC m=+0.047671712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:09:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:09:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:09:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:09:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:09:08 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:08 compute-0 podman[243923]: 2026-01-21 14:09:08.344801777 +0000 UTC m=+0.181114133 container init cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 14:09:08 compute-0 podman[243923]: 2026-01-21 14:09:08.357093634 +0000 UTC m=+0.193405950 container start cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:09:08 compute-0 podman[243923]: 2026-01-21 14:09:08.362162901 +0000 UTC m=+0.198475207 container attach cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 14:09:08 compute-0 goofy_kalam[243939]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:09:08 compute-0 goofy_kalam[243939]: --> All data devices are unavailable
Jan 21 14:09:08 compute-0 systemd[1]: libpod-cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae.scope: Deactivated successfully.
Jan 21 14:09:08 compute-0 podman[243923]: 2026-01-21 14:09:08.987624838 +0000 UTC m=+0.823937134 container died cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 14:09:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c29bdd247b500bbc2c537a82287a30b3e3d3cb9d48835c21dc7ee59cb879c745-merged.mount: Deactivated successfully.
Jan 21 14:09:09 compute-0 podman[243923]: 2026-01-21 14:09:09.040290984 +0000 UTC m=+0.876603270 container remove cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 14:09:09 compute-0 systemd[1]: libpod-conmon-cdf803866dfa3df2109d3076074cc22da4461e4f22c60c39f9100b509dd7a3ae.scope: Deactivated successfully.
Jan 21 14:09:09 compute-0 sudo[243846]: pam_unix(sudo:session): session closed for user root
Jan 21 14:09:09 compute-0 sudo[243972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:09:09 compute-0 sudo[243972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:09:09 compute-0 sudo[243972]: pam_unix(sudo:session): session closed for user root
Jan 21 14:09:09 compute-0 sudo[243997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:09:09 compute-0 sudo[243997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:09:09 compute-0 ceph-mon[75031]: pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:09 compute-0 podman[244034]: 2026-01-21 14:09:09.702927739 +0000 UTC m=+0.060715547 container create 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:09:09 compute-0 systemd[1]: Started libpod-conmon-8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8.scope.
Jan 21 14:09:09 compute-0 podman[244034]: 2026-01-21 14:09:09.674245583 +0000 UTC m=+0.032033431 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:09:09 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:09:09 compute-0 podman[244034]: 2026-01-21 14:09:09.792845934 +0000 UTC m=+0.150633782 container init 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 14:09:09 compute-0 podman[244034]: 2026-01-21 14:09:09.803673675 +0000 UTC m=+0.161461473 container start 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:09:09 compute-0 podman[244034]: 2026-01-21 14:09:09.807400458 +0000 UTC m=+0.165188316 container attach 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 14:09:09 compute-0 hopeful_nash[244051]: 167 167
Jan 21 14:09:09 compute-0 systemd[1]: libpod-8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8.scope: Deactivated successfully.
Jan 21 14:09:09 compute-0 conmon[244051]: conmon 8e3f9ad598518adb572f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8.scope/container/memory.events
Jan 21 14:09:09 compute-0 podman[244034]: 2026-01-21 14:09:09.813303775 +0000 UTC m=+0.171091593 container died 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 14:09:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-231c2bcf1798cfdac306f90548fc996355c94ac40489cc6d879ccac1c9171b07-merged.mount: Deactivated successfully.
Jan 21 14:09:09 compute-0 podman[244034]: 2026-01-21 14:09:09.863343565 +0000 UTC m=+0.221131363 container remove 8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:09:09 compute-0 systemd[1]: libpod-conmon-8e3f9ad598518adb572f8a44224629cb5cdc23b35e99f741122d82bb578408b8.scope: Deactivated successfully.
Jan 21 14:09:10 compute-0 podman[244075]: 2026-01-21 14:09:10.089206484 +0000 UTC m=+0.047378864 container create 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 14:09:10 compute-0 systemd[1]: Started libpod-conmon-01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2.scope.
Jan 21 14:09:10 compute-0 podman[244075]: 2026-01-21 14:09:10.068886227 +0000 UTC m=+0.027058637 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:09:10 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c4d759ac7386c9e5b3afe9527ef502ae0f59592e163cf811adbb7ddfa0d16e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c4d759ac7386c9e5b3afe9527ef502ae0f59592e163cf811adbb7ddfa0d16e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c4d759ac7386c9e5b3afe9527ef502ae0f59592e163cf811adbb7ddfa0d16e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c4d759ac7386c9e5b3afe9527ef502ae0f59592e163cf811adbb7ddfa0d16e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:10 compute-0 podman[244075]: 2026-01-21 14:09:10.199754295 +0000 UTC m=+0.157926745 container init 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 14:09:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:10 compute-0 podman[244075]: 2026-01-21 14:09:10.211519188 +0000 UTC m=+0.169691578 container start 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 21 14:09:10 compute-0 podman[244075]: 2026-01-21 14:09:10.216278977 +0000 UTC m=+0.174451437 container attach 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:09:10 compute-0 cranky_mclean[244092]: {
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:     "0": [
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:         {
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "devices": [
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "/dev/loop3"
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             ],
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_name": "ceph_lv0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_size": "21470642176",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "name": "ceph_lv0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "tags": {
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.cluster_name": "ceph",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.crush_device_class": "",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.encrypted": "0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.objectstore": "bluestore",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.osd_id": "0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.type": "block",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.vdo": "0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.with_tpm": "0"
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             },
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "type": "block",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "vg_name": "ceph_vg0"
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:         }
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:     ],
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:     "1": [
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:         {
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "devices": [
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "/dev/loop4"
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             ],
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_name": "ceph_lv1",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_size": "21470642176",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "name": "ceph_lv1",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "tags": {
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.cluster_name": "ceph",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.crush_device_class": "",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.encrypted": "0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.objectstore": "bluestore",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.osd_id": "1",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.type": "block",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.vdo": "0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.with_tpm": "0"
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             },
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "type": "block",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "vg_name": "ceph_vg1"
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:         }
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:     ],
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:     "2": [
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:         {
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "devices": [
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "/dev/loop5"
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             ],
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_name": "ceph_lv2",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_size": "21470642176",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "name": "ceph_lv2",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "tags": {
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.cluster_name": "ceph",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.crush_device_class": "",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.encrypted": "0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.objectstore": "bluestore",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.osd_id": "2",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.type": "block",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.vdo": "0",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:                 "ceph.with_tpm": "0"
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             },
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "type": "block",
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:             "vg_name": "ceph_vg2"
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:         }
Jan 21 14:09:10 compute-0 cranky_mclean[244092]:     ]
Jan 21 14:09:10 compute-0 cranky_mclean[244092]: }
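
The JSON block above is `ceph-volume lvm list --format json` output, collected through cephadm in the cranky_mclean container: a map from OSD id to its logical volumes, with the LVM tags carried twice, once as the flattened lv_tags string and once as the parsed tags object. A minimal sketch of consuming it (the file name lvm_list.json is hypothetical; in the log the JSON goes to the container's stdout):

    import json

    # Parse the `ceph-volume lvm list --format json` output captured above
    # (hypothetical file name; cephadm writes it to stdout).
    with open("lvm_list.json") as f:
        osds = json.load(f)

    total_bytes = 0
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]  # same key=value pairs as the flattened lv_tags string
            total_bytes += int(lv["lv_size"])
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")

    # Each lv_size is 21470642176 bytes (~20 GiB); three such OSDs give
    # ~60 GiB, matching the "60 GiB / 60 GiB avail" pgmap figures below.
    print(f"total: {total_bytes / 2**30:.1f} GiB")
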
Jan 21 14:09:10 compute-0 systemd[1]: libpod-01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2.scope: Deactivated successfully.
Jan 21 14:09:10 compute-0 podman[244075]: 2026-01-21 14:09:10.535248462 +0000 UTC m=+0.493420812 container died 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 14:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-97c4d759ac7386c9e5b3afe9527ef502ae0f59592e163cf811adbb7ddfa0d16e-merged.mount: Deactivated successfully.
Jan 21 14:09:10 compute-0 podman[244075]: 2026-01-21 14:09:10.590996793 +0000 UTC m=+0.549169153 container remove 01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mclean, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 21 14:09:10 compute-0 systemd[1]: libpod-conmon-01fdc66d7c1abcf09402a7c049832c708217f83edbb9da3f0b95088eeef30cc2.scope: Deactivated successfully.
Jan 21 14:09:10 compute-0 sudo[243997]: pam_unix(sudo:session): session closed for user root
Jan 21 14:09:10 compute-0 sudo[244115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:09:10 compute-0 sudo[244115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:09:10 compute-0 sudo[244115]: pam_unix(sudo:session): session closed for user root
Jan 21 14:09:10 compute-0 sudo[244140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:09:10 compute-0 sudo[244140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:09:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:09:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:09:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:09:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:09:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:09:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:09:11 compute-0 podman[244178]: 2026-01-21 14:09:11.203152279 +0000 UTC m=+0.073232580 container create 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 14:09:11 compute-0 systemd[1]: Started libpod-conmon-8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8.scope.
Jan 21 14:09:11 compute-0 podman[244178]: 2026-01-21 14:09:11.175371255 +0000 UTC m=+0.045451606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:09:11 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:09:11 compute-0 podman[244178]: 2026-01-21 14:09:11.307708139 +0000 UTC m=+0.177788470 container init 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 14:09:11 compute-0 podman[244178]: 2026-01-21 14:09:11.32013978 +0000 UTC m=+0.190220071 container start 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:09:11 compute-0 podman[244178]: 2026-01-21 14:09:11.324590921 +0000 UTC m=+0.194671262 container attach 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 14:09:11 compute-0 optimistic_shtern[244195]: 167 167
Jan 21 14:09:11 compute-0 systemd[1]: libpod-8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8.scope: Deactivated successfully.
Jan 21 14:09:11 compute-0 podman[244178]: 2026-01-21 14:09:11.328618801 +0000 UTC m=+0.198699092 container died 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:09:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffefbb312087164ae4b78f19e93a18377011fc77853d0b0f869291ea3143688c-merged.mount: Deactivated successfully.
Jan 21 14:09:11 compute-0 podman[244178]: 2026-01-21 14:09:11.385540563 +0000 UTC m=+0.255620834 container remove 8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 14:09:11 compute-0 ceph-mon[75031]: pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:11 compute-0 systemd[1]: libpod-conmon-8200a58e432f2d5121f284596d449406d3cee79fdcfe82f8c5add81bab095fd8.scope: Deactivated successfully.
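
Each of these short-lived cephadm helper containers leaves the same podman trail: create, init, start, attach, one line of payload on stdout (the "167 167" printed by optimistic_shtern is consistent with the uid/gid of the ceph user baked into the image), then died and remove, with systemd deactivating the matching libpod scopes. The same lifecycle can be followed live; a sketch, assuming a podman version whose `events` command supports --format json (event field names can vary between versions):

    import json
    import subprocess

    # Stream the same event sequence podman records above:
    # create -> init -> start -> attach -> died -> remove.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "container=optimistic_shtern"],
        stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:              # one JSON object per event
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Image", ""))
        if ev.get("Status") == "remove":  # lifecycle finished
            proc.terminate()
            break
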
Jan 21 14:09:11 compute-0 podman[244217]: 2026-01-21 14:09:11.64172211 +0000 UTC m=+0.065930017 container create 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 14:09:11 compute-0 systemd[1]: Started libpod-conmon-91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83.scope.
Jan 21 14:09:11 compute-0 podman[244217]: 2026-01-21 14:09:11.614752426 +0000 UTC m=+0.038960393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:09:11 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a713d54eac246b6500099e433a7a127a78fa8593bff8f6ceb02739a34c92252/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a713d54eac246b6500099e433a7a127a78fa8593bff8f6ceb02739a34c92252/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a713d54eac246b6500099e433a7a127a78fa8593bff8f6ceb02739a34c92252/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a713d54eac246b6500099e433a7a127a78fa8593bff8f6ceb02739a34c92252/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
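
The four kernel lines are the standard XFS big-timestamps notice: inodes in the legacy on-disk format store timestamps as signed 32-bit seconds, so 0x7fffffff is the last representable moment. A two-line check of what that limit means:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t, i.e. the Y2038 limit
    # quoted by the kernel messages above.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
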
Jan 21 14:09:11 compute-0 podman[244217]: 2026-01-21 14:09:11.746475836 +0000 UTC m=+0.170683753 container init 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 14:09:11 compute-0 podman[244217]: 2026-01-21 14:09:11.758541766 +0000 UTC m=+0.182749673 container start 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:09:11 compute-0 podman[244217]: 2026-01-21 14:09:11.763737146 +0000 UTC m=+0.187945043 container attach 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 14:09:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:12 compute-0 lvm[244312]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:09:12 compute-0 lvm[244312]: VG ceph_vg0 finished
Jan 21 14:09:12 compute-0 lvm[244313]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:09:12 compute-0 lvm[244313]: VG ceph_vg1 finished
Jan 21 14:09:12 compute-0 lvm[244315]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:09:12 compute-0 lvm[244315]: VG ceph_vg2 finished
Jan 21 14:09:12 compute-0 admiring_bohr[244233]: {}
Jan 21 14:09:12 compute-0 systemd[1]: libpod-91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83.scope: Deactivated successfully.
Jan 21 14:09:12 compute-0 systemd[1]: libpod-91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83.scope: Consumed 1.325s CPU time.
Jan 21 14:09:12 compute-0 podman[244217]: 2026-01-21 14:09:12.565719861 +0000 UTC m=+0.989927778 container died 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 14:09:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a713d54eac246b6500099e433a7a127a78fa8593bff8f6ceb02739a34c92252-merged.mount: Deactivated successfully.
Jan 21 14:09:12 compute-0 podman[244217]: 2026-01-21 14:09:12.63617818 +0000 UTC m=+1.060386077 container remove 91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bohr, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:09:12 compute-0 systemd[1]: libpod-conmon-91a9a84c7c3632a9d574304b5a32bb5ff5466e86475a883beaa62ffe21c56e83.scope: Deactivated successfully.
Jan 21 14:09:12 compute-0 sudo[244140]: pam_unix(sudo:session): session closed for user root
Jan 21 14:09:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:09:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:09:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:09:12 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:09:12 compute-0 sudo[244328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:09:12 compute-0 sudo[244328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:09:12 compute-0 sudo[244328]: pam_unix(sudo:session): session closed for user root
Jan 21 14:09:13 compute-0 ceph-mon[75031]: pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:09:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:09:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:15 compute-0 ceph-mon[75031]: pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:17 compute-0 ceph-mon[75031]: pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:19 compute-0 ceph-mon[75031]: pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:21 compute-0 ceph-mon[75031]: pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:09:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/483851383' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:09:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:09:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/483851383' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:09:23 compute-0 ceph-mon[75031]: pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/483851383' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:09:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/483851383' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:09:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:25 compute-0 ceph-mon[75031]: pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:27 compute-0 ceph-mon[75031]: pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:27 compute-0 podman[244354]: 2026-01-21 14:09:27.357781292 +0000 UTC m=+0.074928002 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 21 14:09:27 compute-0 podman[244353]: 2026-01-21 14:09:27.394459194 +0000 UTC m=+0.115250245 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
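
The config_data blobs embedded in these health_status events are Python dict literals (single quotes, bare True), not JSON, so json.loads rejects them while ast.literal_eval parses them safely. A sketch on an abbreviated copy of the ovn_controller entry above:

    import ast

    # Abbreviated from the ovn_controller health_status label above; the
    # single quotes and bare `True` make it a Python literal, not JSON.
    config_data = (
        "{'depends_on': ['openvswitch.service'], "
        "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', "
        "'test': '/openstack/healthcheck'}, "
        "'privileged': True, 'restart': 'always'}")

    cfg = ast.literal_eval(config_data)   # safe: literals only, no code execution
    print(cfg["healthcheck"]["test"])     # -> /openstack/healthcheck
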
Jan 21 14:09:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:29 compute-0 sshd-session[244400]: banner exchange: Connection from 128.203.202.236 port 46896: invalid format
Jan 21 14:09:29 compute-0 ceph-mon[75031]: pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:31 compute-0 ceph-mon[75031]: pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:33 compute-0 ceph-mon[75031]: pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:09:33.898 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:09:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:09:33.899 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:09:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:09:33.899 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:09:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:35 compute-0 ceph-mon[75031]: pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:37 compute-0 ceph-mon[75031]: pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:38 compute-0 nova_compute[239261]: 2026-01-21 14:09:38.726 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:09:38 compute-0 nova_compute[239261]: 2026-01-21 14:09:38.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:09:38 compute-0 nova_compute[239261]: 2026-01-21 14:09:38.727 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:09:38 compute-0 sshd-session[244398]: Connection closed by 128.203.202.236 port 46884 [preauth]
Jan 21 14:09:38 compute-0 nova_compute[239261]: 2026-01-21 14:09:38.875 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:09:39 compute-0 ceph-mon[75031]: pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:09:39
Jan 21 14:09:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:09:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:09:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'images', '.mgr', 'vms', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 21 14:09:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:09:39 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 14:09:39 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 14:09:39 compute-0 nova_compute[239261]: 2026-01-21 14:09:39.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:09:39 compute-0 nova_compute[239261]: 2026-01-21 14:09:39.745 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:09:39 compute-0 nova_compute[239261]: 2026-01-21 14:09:39.785 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:09:39 compute-0 nova_compute[239261]: 2026-01-21 14:09:39.785 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:09:39 compute-0 nova_compute[239261]: 2026-01-21 14:09:39.786 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:09:39 compute-0 nova_compute[239261]: 2026-01-21 14:09:39.786 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:09:39 compute-0 nova_compute[239261]: 2026-01-21 14:09:39.786 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:09:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:09:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543998522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:09:40 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2543998522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:09:40 compute-0 nova_compute[239261]: 2026-01-21 14:09:40.353 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
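
The resource audit shells out to exactly the command shown in these lines to size the RBD-backed storage. Reproducing it directly and reading the cluster totals (command, client id, and conf path copied from the log; key names as emitted by current `ceph df --format=json`):

    import json
    import subprocess

    # The exact command nova_compute runs above via oslo processutils.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)["stats"]
    # total_avail_bytes / total_bytes correspond to the
    # "60 GiB / 60 GiB avail" figures in the pgmap lines.
    print(stats["total_avail_bytes"], "/", stats["total_bytes"], "bytes free")
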
Jan 21 14:09:40 compute-0 nova_compute[239261]: 2026-01-21 14:09:40.542 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:09:40 compute-0 nova_compute[239261]: 2026-01-21 14:09:40.543 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5144MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:09:40 compute-0 nova_compute[239261]: 2026-01-21 14:09:40.543 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:09:40 compute-0 nova_compute[239261]: 2026-01-21 14:09:40.543 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:09:40 compute-0 nova_compute[239261]: 2026-01-21 14:09:40.647 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:09:40 compute-0 nova_compute[239261]: 2026-01-21 14:09:40.647 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:09:40 compute-0 nova_compute[239261]: 2026-01-21 14:09:40.664 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:09:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:09:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:09:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:09:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3987041817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:09:41 compute-0 nova_compute[239261]: 2026-01-21 14:09:41.250 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:09:41 compute-0 nova_compute[239261]: 2026-01-21 14:09:41.259 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:09:41 compute-0 ceph-mon[75031]: pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3987041817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:09:41 compute-0 nova_compute[239261]: 2026-01-21 14:09:41.371 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:09:41 compute-0 nova_compute[239261]: 2026-01-21 14:09:41.375 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:09:41 compute-0 nova_compute[239261]: 2026-01-21 14:09:41.376 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
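
The inventory dict logged above is what placement schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked numbers from that line:

    # Inventory reported in the scheduler update above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, v in inv.items():
        # Effective capacity as placement computes it.
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1
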
Jan 21 14:09:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:42 compute-0 nova_compute[239261]: 2026-01-21 14:09:42.356 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:09:42 compute-0 nova_compute[239261]: 2026-01-21 14:09:42.357 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:09:42 compute-0 nova_compute[239261]: 2026-01-21 14:09:42.357 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:09:42 compute-0 nova_compute[239261]: 2026-01-21 14:09:42.357 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:09:42 compute-0 nova_compute[239261]: 2026-01-21 14:09:42.357 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:09:42 compute-0 nova_compute[239261]: 2026-01-21 14:09:42.358 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:09:42 compute-0 nova_compute[239261]: 2026-01-21 14:09:42.358 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:09:42 compute-0 nova_compute[239261]: 2026-01-21 14:09:42.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:09:43 compute-0 ceph-mon[75031]: pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:45 compute-0 ceph-mon[75031]: pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:47 compute-0 ceph-mon[75031]: pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 21 14:09:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 21 14:09:48 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 21 14:09:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 21 14:09:49 compute-0 ceph-mon[75031]: pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:09:49 compute-0 ceph-mon[75031]: osdmap e123: 3 total, 3 up, 3 in
Jan 21 14:09:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 21 14:09:49 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 895 B/s wr, 3 op/s
Jan 21 14:09:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 21 14:09:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 21 14:09:50 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 21 14:09:50 compute-0 ceph-mon[75031]: osdmap e124: 3 total, 3 up, 3 in
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2771392571877214e-06 of space, bias 4.0, pg target 0.0015325671086252656 quantized to 16 (current 16)
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:09:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
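[The pg_autoscaler pass above is plain arithmetic on the logged inputs: each pool's "pg target" is its fraction of raw capacity (the "using X of space" figure, taken against the 64411926528-byte / 60 GiB total) times its bias times the cluster PG budget. A minimal re-derivation in Python, assuming the budget is the default mon_target_pg_per_osd of 100 times the 3 OSDs in the osdmap; the helper below is an illustration, not the autoscaler's actual code:

    # Hypothetical re-derivation of the "pg target" values logged above.
    # Assumed budget: mon_target_pg_per_osd (default 100) * 3 OSDs = 300 PGs.
    def pg_target(usage_ratio, bias, pg_budget=300):
        return usage_ratio * bias * pg_budget

    # Pool '.mgr', bias 1.0 -> ~0.0021557 as logged
    print(pg_target(7.185749983720779e-06, 1.0))
    # Pool 'cephfs.cephfs.meta', bias 4.0 -> ~0.0015326 as logged
    print(pg_target(1.2771392571877214e-06, 4.0))
    # Pool 'images', bias 1.0 -> ~5.7232e-05 as logged
    print(pg_target(1.9077212346161359e-07, 1.0))

The "quantized to N (current N)" step then rounds toward a power of two no lower than the pool's current/minimum pg_num (the exact rounding rule is not shown in this log), which is why these tiny targets all collapse back to the existing 1, 16 or 32.]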
Jan 21 14:09:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:51 compute-0 ceph-mon[75031]: pgmap v870: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 895 B/s wr, 3 op/s
Jan 21 14:09:51 compute-0 ceph-mon[75031]: osdmap e125: 3 total, 3 up, 3 in
Jan 21 14:09:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 21 14:09:53 compute-0 ceph-mon[75031]: pgmap v872: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 21 14:09:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 8.5 MiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Jan 21 14:09:55 compute-0 ceph-mon[75031]: pgmap v873: 305 pgs: 305 active+clean; 8.5 MiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Jan 21 14:09:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:09:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Jan 21 14:09:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 21 14:09:57 compute-0 ceph-mon[75031]: pgmap v874: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Jan 21 14:09:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 21 14:09:57 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 21 14:09:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 2.6 MiB/s wr, 20 op/s
Jan 21 14:09:58 compute-0 podman[244447]: 2026-01-21 14:09:58.370235542 +0000 UTC m=+0.065970620 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 14:09:58 compute-0 podman[244446]: 2026-01-21 14:09:58.386004324 +0000 UTC m=+0.097005261 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 21 14:09:59 compute-0 ceph-mon[75031]: osdmap e126: 3 total, 3 up, 3 in
Jan 21 14:10:00 compute-0 ceph-mon[75031]: pgmap v876: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 2.6 MiB/s wr, 20 op/s
Jan 21 14:10:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.2 MiB/s wr, 36 op/s
Jan 21 14:10:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 21 14:10:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 21 14:10:00 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 21 14:10:01 compute-0 ceph-mon[75031]: pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.2 MiB/s wr, 36 op/s
Jan 21 14:10:01 compute-0 ceph-mon[75031]: osdmap e127: 3 total, 3 up, 3 in
Jan 21 14:10:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.1 MiB/s wr, 28 op/s
Jan 21 14:10:03 compute-0 ceph-mon[75031]: pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.1 MiB/s wr, 28 op/s
Jan 21 14:10:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Jan 21 14:10:05 compute-0 ceph-mon[75031]: pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Jan 21 14:10:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.4 MiB/s wr, 22 op/s
Jan 21 14:10:07 compute-0 ceph-mon[75031]: pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.4 MiB/s wr, 22 op/s
Jan 21 14:10:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.361248) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608361284, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2070, "num_deletes": 251, "total_data_size": 3521512, "memory_usage": 3584480, "flush_reason": "Manual Compaction"}
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608388745, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3455214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16353, "largest_seqno": 18422, "table_properties": {"data_size": 3445733, "index_size": 6039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18706, "raw_average_key_size": 19, "raw_value_size": 3426841, "raw_average_value_size": 3649, "num_data_blocks": 272, "num_entries": 939, "num_filter_entries": 939, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004382, "oldest_key_time": 1769004382, "file_creation_time": 1769004608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 27579 microseconds, and 9316 cpu microseconds.
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.388817) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3455214 bytes OK
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.388847) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.401007) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.401037) EVENT_LOG_v1 {"time_micros": 1769004608401029, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.401064) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3512855, prev total WAL file size 3512855, number of live WAL files 2.
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.402730) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3374KB)], [38(7779KB)]
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608402841, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11421474, "oldest_snapshot_seqno": -1}
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4442 keys, 9612497 bytes, temperature: kUnknown
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608491736, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9612497, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9579085, "index_size": 21206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 107399, "raw_average_key_size": 24, "raw_value_size": 9495166, "raw_average_value_size": 2137, "num_data_blocks": 900, "num_entries": 4442, "num_filter_entries": 4442, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.491996) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9612497 bytes
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.496162) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.3 rd, 108.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4960, records dropped: 518 output_compression: NoCompression
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.496207) EVENT_LOG_v1 {"time_micros": 1769004608496191, "job": 18, "event": "compaction_finished", "compaction_time_micros": 89003, "compaction_time_cpu_micros": 25280, "output_level": 6, "num_output_files": 1, "total_output_size": 9612497, "num_input_records": 4960, "num_output_records": 4442, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608497152, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004608498976, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.402600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.499090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.499098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.499100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.499103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:10:08 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:08.499105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
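[The compaction summary logged at 14:10:08 is internally consistent with the EVENT_LOG byte counts: JOB 18 read L0 file #40 (3455214 bytes) plus L6 file #38 (11421474 input bytes in total) and wrote table #41 (9612497 bytes) in 89003 microseconds. A quick arithmetic check on those logged values:

    # Re-deriving JOB 18's summary figures from the event-log numbers above.
    l0_in = 3455214        # flushed L0 table #40
    total_in = 11421474    # "input_data_size" (#40 + #38)
    out = 9612497          # output L6 table #41
    secs = 89003 / 1e6     # "compaction_time_micros"

    print(round(out / l0_in, 1))               # 2.8  -> write-amplify(2.8)
    print(round((total_in + out) / l0_in, 1))  # 6.1  -> read-write-amplify(6.1)
    print(round(total_in / 1e6 / secs, 1))     # 128.3 -> "MB/sec: 128.3 rd"
    print(round(out / 1e6 / secs, 1))          # 108.0 -> "108.0 wr"

The records dropped (518 of 4960) are the tombstones and overwritten keys eliminated when the manually compacted range was merged down to L6.]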
Jan 21 14:10:09 compute-0 ceph-mon[75031]: pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 21 14:10:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:10:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:10:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:10:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:10:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:10:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:10:11 compute-0 ceph-mon[75031]: pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:12 compute-0 sudo[244492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:10:12 compute-0 sudo[244492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:10:12 compute-0 sudo[244492]: pam_unix(sudo:session): session closed for user root
Jan 21 14:10:12 compute-0 sudo[244517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:10:12 compute-0 sudo[244517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:10:13 compute-0 ceph-mon[75031]: pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:13 compute-0 sudo[244517]: pam_unix(sudo:session): session closed for user root
Jan 21 14:10:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:10:13 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:10:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:10:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:10:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:10:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:10:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:10:13 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:10:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:10:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:10:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:10:13 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
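[The mon_command payloads audited above are bare JSON objects keyed by "prefix"; the mgr dispatches them over its monitor session, and an external client can issue the identical calls through the librados Python binding. A minimal sketch, assuming python3-rados is installed and that /etc/ceph/ceph.conf plus an admin keyring are readable on the node (those paths are assumptions, not taken from this log):

    import json
    import rados

    # Connect using the (assumed) standard config path and default keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # The same command the mgr dispatches above.
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outbuf.decode())

    cluster.shutdown()

Each such call shows up in the audit channel exactly like the entries above, tagged with the caller's entity name and address.]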
Jan 21 14:10:13 compute-0 sudo[244574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:10:13 compute-0 sudo[244574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:10:13 compute-0 sudo[244574]: pam_unix(sudo:session): session closed for user root
Jan 21 14:10:13 compute-0 sudo[244599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:10:13 compute-0 sudo[244599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:10:14 compute-0 podman[244636]: 2026-01-21 14:10:14.04178078 +0000 UTC m=+0.046080756 container create 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:10:14 compute-0 systemd[1]: Started libpod-conmon-35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331.scope.
Jan 21 14:10:14 compute-0 podman[244636]: 2026-01-21 14:10:14.022719177 +0000 UTC m=+0.027019193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:10:14 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:10:14 compute-0 podman[244636]: 2026-01-21 14:10:14.138882753 +0000 UTC m=+0.143182769 container init 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 14:10:14 compute-0 podman[244636]: 2026-01-21 14:10:14.149530518 +0000 UTC m=+0.153830534 container start 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 14:10:14 compute-0 podman[244636]: 2026-01-21 14:10:14.153891926 +0000 UTC m=+0.158191932 container attach 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:10:14 compute-0 agitated_darwin[244653]: 167 167
Jan 21 14:10:14 compute-0 systemd[1]: libpod-35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331.scope: Deactivated successfully.
Jan 21 14:10:14 compute-0 conmon[244653]: conmon 35f70c8de86f962a3657 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331.scope/container/memory.events
Jan 21 14:10:14 compute-0 podman[244636]: 2026-01-21 14:10:14.156267815 +0000 UTC m=+0.160567801 container died 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0616dbc626913fac192180f64b7db3bfa24cdddbae3b28b2cf60ba3494b67a0-merged.mount: Deactivated successfully.
Jan 21 14:10:14 compute-0 podman[244636]: 2026-01-21 14:10:14.199223812 +0000 UTC m=+0.203523798 container remove 35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 14:10:14 compute-0 systemd[1]: libpod-conmon-35f70c8de86f962a365776e4885179a0dd5f55d3ec42d1e9c17d35ac40446331.scope: Deactivated successfully.
Jan 21 14:10:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:14 compute-0 podman[244677]: 2026-01-21 14:10:14.365257838 +0000 UTC m=+0.040379894 container create 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:10:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:10:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:10:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:10:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:10:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:10:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:10:14 compute-0 systemd[1]: Started libpod-conmon-38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78.scope.
Jan 21 14:10:14 compute-0 podman[244677]: 2026-01-21 14:10:14.346308377 +0000 UTC m=+0.021430463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:10:14 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:14 compute-0 podman[244677]: 2026-01-21 14:10:14.463192341 +0000 UTC m=+0.138314487 container init 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:10:14 compute-0 podman[244677]: 2026-01-21 14:10:14.472631176 +0000 UTC m=+0.147753232 container start 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:10:14 compute-0 podman[244677]: 2026-01-21 14:10:14.476631225 +0000 UTC m=+0.151753281 container attach 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 14:10:14 compute-0 angry_ride[244693]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:10:14 compute-0 angry_ride[244693]: --> All data devices are unavailable
Jan 21 14:10:14 compute-0 systemd[1]: libpod-38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78.scope: Deactivated successfully.
Jan 21 14:10:14 compute-0 podman[244677]: 2026-01-21 14:10:14.994194695 +0000 UTC m=+0.669316771 container died 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-94716a224cb5bb3271c73e4453feb18c5cfed7956237fe74f51f59394379d917-merged.mount: Deactivated successfully.
Jan 21 14:10:15 compute-0 podman[244677]: 2026-01-21 14:10:15.044108216 +0000 UTC m=+0.719230302 container remove 38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:10:15 compute-0 systemd[1]: libpod-conmon-38c7cd630aab0bd4cbbb07132bfd62b4bea3db7c0bdf0e697c6b6019283c6f78.scope: Deactivated successfully.
Jan 21 14:10:15 compute-0 sudo[244599]: pam_unix(sudo:session): session closed for user root
Jan 21 14:10:15 compute-0 sudo[244722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:10:15 compute-0 sudo[244722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:10:15 compute-0 sudo[244722]: pam_unix(sudo:session): session closed for user root
Jan 21 14:10:15 compute-0 sudo[244747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:10:15 compute-0 sudo[244747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
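[The `ceph-volume ... lvm list --format json` run invoked here prints the structure seen below: a JSON object keyed by OSD id, each value a list of LV records whose lv_tags mark the volumes as already-prepared bluestore OSDs, consistent with the preceding `lvm batch` pass reporting all data devices unavailable (already in use). A small sketch of consuming that output, assuming it has been captured to a file; the filename is hypothetical, as cephadm itself reads it from the container's stdout:

    import json

    # Parse a captured copy of the `lvm list --format json` output below.
    with open('lvm_list.json') as f:   # hypothetical capture file
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items()):
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"osd_fsid={tags['ceph.osd_fsid']} type={lv['type']}")]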
Jan 21 14:10:15 compute-0 podman[244783]: 2026-01-21 14:10:15.540598451 +0000 UTC m=+0.037604645 container create d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 14:10:15 compute-0 systemd[1]: Started libpod-conmon-d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d.scope.
Jan 21 14:10:15 compute-0 ceph-mon[75031]: pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:15 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:10:15 compute-0 podman[244783]: 2026-01-21 14:10:15.618654701 +0000 UTC m=+0.115660905 container init d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 14:10:15 compute-0 podman[244783]: 2026-01-21 14:10:15.523787514 +0000 UTC m=+0.020793728 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:10:15 compute-0 podman[244783]: 2026-01-21 14:10:15.625004129 +0000 UTC m=+0.122010313 container start d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:10:15 compute-0 podman[244783]: 2026-01-21 14:10:15.630600818 +0000 UTC m=+0.127607022 container attach d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 14:10:15 compute-0 friendly_pascal[244799]: 167 167
Jan 21 14:10:15 compute-0 systemd[1]: libpod-d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d.scope: Deactivated successfully.
Jan 21 14:10:15 compute-0 podman[244783]: 2026-01-21 14:10:15.632888025 +0000 UTC m=+0.129894229 container died d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 14:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-72b8cc38e309d85cec04e63b1b6fb30ec249d7d07be05510c5778239ac5c2658-merged.mount: Deactivated successfully.
Jan 21 14:10:15 compute-0 podman[244783]: 2026-01-21 14:10:15.675168005 +0000 UTC m=+0.172174229 container remove d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:10:15 compute-0 systemd[1]: libpod-conmon-d606035071c420cc18f55fb8c54c6d6d6cad692ce8f610c29fea7e82d7b2453d.scope: Deactivated successfully.
Jan 21 14:10:15 compute-0 podman[244823]: 2026-01-21 14:10:15.835310605 +0000 UTC m=+0.029161976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:10:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:16 compute-0 podman[244823]: 2026-01-21 14:10:16.284142507 +0000 UTC m=+0.477993858 container create a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 14:10:16 compute-0 systemd[1]: Started libpod-conmon-a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc.scope.
Jan 21 14:10:16 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6704a7736c1b5c692e5bdbc6df088ff32d346c4a76941209b3b5a917e13c2014/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6704a7736c1b5c692e5bdbc6df088ff32d346c4a76941209b3b5a917e13c2014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6704a7736c1b5c692e5bdbc6df088ff32d346c4a76941209b3b5a917e13c2014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6704a7736c1b5c692e5bdbc6df088ff32d346c4a76941209b3b5a917e13c2014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:16 compute-0 podman[244823]: 2026-01-21 14:10:16.39091066 +0000 UTC m=+0.584762011 container init a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 21 14:10:16 compute-0 podman[244823]: 2026-01-21 14:10:16.3997659 +0000 UTC m=+0.593617261 container start a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 14:10:16 compute-0 podman[244823]: 2026-01-21 14:10:16.40380573 +0000 UTC m=+0.597657101 container attach a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 14:10:16 compute-0 determined_meninsky[244840]: {
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:     "0": [
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:         {
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "devices": [
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "/dev/loop3"
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             ],
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_name": "ceph_lv0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_size": "21470642176",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "name": "ceph_lv0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "tags": {
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.cluster_name": "ceph",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.crush_device_class": "",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.encrypted": "0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.objectstore": "bluestore",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.osd_id": "0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.type": "block",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.vdo": "0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.with_tpm": "0"
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             },
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "type": "block",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "vg_name": "ceph_vg0"
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:         }
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:     ],
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:     "1": [
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:         {
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "devices": [
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "/dev/loop4"
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             ],
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_name": "ceph_lv1",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_size": "21470642176",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "name": "ceph_lv1",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "tags": {
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.cluster_name": "ceph",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.crush_device_class": "",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.encrypted": "0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.objectstore": "bluestore",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.osd_id": "1",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.type": "block",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.vdo": "0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.with_tpm": "0"
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             },
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "type": "block",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "vg_name": "ceph_vg1"
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:         }
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:     ],
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:     "2": [
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:         {
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "devices": [
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "/dev/loop5"
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             ],
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_name": "ceph_lv2",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_size": "21470642176",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "name": "ceph_lv2",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "tags": {
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.cluster_name": "ceph",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.crush_device_class": "",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.encrypted": "0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.objectstore": "bluestore",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.osd_id": "2",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.type": "block",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.vdo": "0",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:                 "ceph.with_tpm": "0"
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             },
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "type": "block",
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:             "vg_name": "ceph_vg2"
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:         }
Jan 21 14:10:16 compute-0 determined_meninsky[244840]:     ]
Jan 21 14:10:16 compute-0 determined_meninsky[244840]: }
Jan 21 14:10:16 compute-0 systemd[1]: libpod-a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc.scope: Deactivated successfully.
Jan 21 14:10:16 compute-0 podman[244823]: 2026-01-21 14:10:16.723659018 +0000 UTC m=+0.917510359 container died a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 14:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6704a7736c1b5c692e5bdbc6df088ff32d346c4a76941209b3b5a917e13c2014-merged.mount: Deactivated successfully.
Jan 21 14:10:16 compute-0 podman[244823]: 2026-01-21 14:10:16.768286517 +0000 UTC m=+0.962137868 container remove a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 21 14:10:16 compute-0 systemd[1]: libpod-conmon-a82ad59f4cb84a1a384ed95bcd1d3655fe30679f2aea516326ae4ab3d88681cc.scope: Deactivated successfully.
Jan 21 14:10:16 compute-0 sudo[244747]: pam_unix(sudo:session): session closed for user root
Jan 21 14:10:16 compute-0 sudo[244861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:10:16 compute-0 sudo[244861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:10:16 compute-0 sudo[244861]: pam_unix(sudo:session): session closed for user root
Jan 21 14:10:16 compute-0 sudo[244886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:10:16 compute-0 sudo[244886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:10:17 compute-0 podman[244923]: 2026-01-21 14:10:17.310797017 +0000 UTC m=+0.098537210 container create d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:10:17 compute-0 podman[244923]: 2026-01-21 14:10:17.239879425 +0000 UTC m=+0.027619598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:10:17 compute-0 systemd[1]: Started libpod-conmon-d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a.scope.
Jan 21 14:10:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:10:17 compute-0 podman[244923]: 2026-01-21 14:10:17.411338965 +0000 UTC m=+0.199079138 container init d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 14:10:17 compute-0 podman[244923]: 2026-01-21 14:10:17.418843092 +0000 UTC m=+0.206583275 container start d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:10:17 compute-0 podman[244923]: 2026-01-21 14:10:17.423676281 +0000 UTC m=+0.211416464 container attach d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:10:17 compute-0 systemd[1]: libpod-d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a.scope: Deactivated successfully.
Jan 21 14:10:17 compute-0 exciting_swartz[244939]: 167 167
Jan 21 14:10:17 compute-0 conmon[244939]: conmon d7d1d17acaa9fc725b86 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a.scope/container/memory.events
Jan 21 14:10:17 compute-0 podman[244923]: 2026-01-21 14:10:17.428496132 +0000 UTC m=+0.216236285 container died d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:10:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ea5bd8f2fb6a9ddfbbb697a061061482fd9c193b6809b4a05a97fadad39a83e-merged.mount: Deactivated successfully.
Jan 21 14:10:17 compute-0 podman[244923]: 2026-01-21 14:10:17.470722861 +0000 UTC m=+0.258463014 container remove d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 14:10:17 compute-0 systemd[1]: libpod-conmon-d7d1d17acaa9fc725b863a7689c0cc9c05488566c814f91753c47bff6070639a.scope: Deactivated successfully.
Jan 21 14:10:17 compute-0 ceph-mon[75031]: pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:17 compute-0 podman[244961]: 2026-01-21 14:10:17.663059849 +0000 UTC m=+0.044237379 container create a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:10:17 compute-0 systemd[1]: Started libpod-conmon-a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61.scope.
Jan 21 14:10:17 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bde2b9a76a57d956be36d55107cb3aacafd799cff3d973f653180bacc106331/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bde2b9a76a57d956be36d55107cb3aacafd799cff3d973f653180bacc106331/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bde2b9a76a57d956be36d55107cb3aacafd799cff3d973f653180bacc106331/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bde2b9a76a57d956be36d55107cb3aacafd799cff3d973f653180bacc106331/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:10:17 compute-0 podman[244961]: 2026-01-21 14:10:17.643178306 +0000 UTC m=+0.024355886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:10:17 compute-0 podman[244961]: 2026-01-21 14:10:17.749878857 +0000 UTC m=+0.131056467 container init a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 21 14:10:17 compute-0 podman[244961]: 2026-01-21 14:10:17.762050619 +0000 UTC m=+0.143228169 container start a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 14:10:17 compute-0 podman[244961]: 2026-01-21 14:10:17.766370987 +0000 UTC m=+0.147548557 container attach a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:10:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:18 compute-0 lvm[245056]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:10:18 compute-0 lvm[245056]: VG ceph_vg0 finished
Jan 21 14:10:18 compute-0 lvm[245057]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:10:18 compute-0 lvm[245057]: VG ceph_vg1 finished
Jan 21 14:10:18 compute-0 lvm[245059]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:10:18 compute-0 lvm[245059]: VG ceph_vg2 finished
Jan 21 14:10:18 compute-0 epic_zhukovsky[244978]: {}
Jan 21 14:10:18 compute-0 systemd[1]: libpod-a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61.scope: Deactivated successfully.
Jan 21 14:10:18 compute-0 systemd[1]: libpod-a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61.scope: Consumed 1.212s CPU time.
Jan 21 14:10:18 compute-0 podman[244961]: 2026-01-21 14:10:18.523985031 +0000 UTC m=+0.905162611 container died a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 14:10:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bde2b9a76a57d956be36d55107cb3aacafd799cff3d973f653180bacc106331-merged.mount: Deactivated successfully.
Jan 21 14:10:18 compute-0 podman[244961]: 2026-01-21 14:10:18.655670513 +0000 UTC m=+1.036848033 container remove a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 14:10:18 compute-0 systemd[1]: libpod-conmon-a7322734fac9584d9ea7c5e1088c9a76db1df856158f4fb3e0249e9ba2588d61.scope: Deactivated successfully.
Jan 21 14:10:18 compute-0 sudo[244886]: pam_unix(sudo:session): session closed for user root
Jan 21 14:10:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:10:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:10:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:10:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:10:18 compute-0 sudo[245076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:10:18 compute-0 sudo[245076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:10:18 compute-0 sudo[245076]: pam_unix(sudo:session): session closed for user root
Jan 21 14:10:19 compute-0 ceph-mon[75031]: pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:10:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:10:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:10:21 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:21 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:21.541+0000 7fc516655640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:22 compute-0 ceph-mon[75031]: pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:22 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/d2577f41-d908-4371-8c43-e8fbe046d39f'.
Jan 21 14:10:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:10:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:10:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:10:22 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "format": "json"}]: dispatch
Jan 21 14:10:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:10:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:10:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:10:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:10:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2631761722' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:10:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:10:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2631761722' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:10:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:23 compute-0 ceph-mon[75031]: pgmap v889: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:10:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "format": "json"}]: dispatch
Jan 21 14:10:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2631761722' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:10:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2631761722' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:10:23 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.tnwklj(active, since 25m)
Jan 21 14:10:23 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 14:10:23 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f1e76c5b-dd9f-45f4-b2d2-e22465776219/a6452fa6-7ff6-41a5-b0cb-e0c7da2f4521'.
Jan 21 14:10:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f1e76c5b-dd9f-45f4-b2d2-e22465776219/.meta.tmp'
Jan 21 14:10:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f1e76c5b-dd9f-45f4-b2d2-e22465776219/.meta.tmp' to config b'/volumes/_nogroup/f1e76c5b-dd9f-45f4-b2d2-e22465776219/.meta'
Jan 21 14:10:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 14:10:23 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "format": "json"}]: dispatch
Jan 21 14:10:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 14:10:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 14:10:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:10:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:24 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 14:10:24 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ec4e87bc-026b-4a6f-938e-c32b3b1010de/205f3a51-88be-4bba-be8f-7be277cabc08'.
Jan 21 14:10:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ec4e87bc-026b-4a6f-938e-c32b3b1010de/.meta.tmp'
Jan 21 14:10:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ec4e87bc-026b-4a6f-938e-c32b3b1010de/.meta.tmp' to config b'/volumes/_nogroup/ec4e87bc-026b-4a6f-938e-c32b3b1010de/.meta'
Jan 21 14:10:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 14:10:24 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "format": "json"}]: dispatch
Jan 21 14:10:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 14:10:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 14:10:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:10:24 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Jan 21 14:10:24 compute-0 ceph-mon[75031]: mgrmap e12: compute-0.tnwklj(active, since 25m)
Jan 21 14:10:24 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:24 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "format": "json"}]: dispatch
Jan 21 14:10:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:25 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:25 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "format": "json"}]: dispatch
Jan 21 14:10:25 compute-0 ceph-mon[75031]: pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Jan 21 14:10:25 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:25 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 14:10:25 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ca297459-dcdc-48cc-b973-0a2fd8a93409/9de7cc8f-afb8-49c1-8ccf-2bc90c8f924e'.
Jan 21 14:10:25 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ca297459-dcdc-48cc-b973-0a2fd8a93409/.meta.tmp'
Jan 21 14:10:25 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ca297459-dcdc-48cc-b973-0a2fd8a93409/.meta.tmp' to config b'/volumes/_nogroup/ca297459-dcdc-48cc-b973-0a2fd8a93409/.meta'
Jan 21 14:10:25 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 14:10:25 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "format": "json"}]: dispatch
Jan 21 14:10:25 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 14:10:25 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 14:10:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:10:25 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:25 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 21 14:10:25 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 14:10:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:26 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:10:26.045 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:10:26 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:10:26.046 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:10:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 14:10:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s wr, 2 op/s
Jan 21 14:10:26 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:26 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "format": "json"}]: dispatch
Jan 21 14:10:26 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:27 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 21 14:10:27 compute-0 ceph-mon[75031]: pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s wr, 2 op/s
Jan 21 14:10:28 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:10:28.047 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s wr, 2 op/s
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "format": "json"}]: dispatch
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ca297459-dcdc-48cc-b973-0a2fd8a93409' of type subvolume
Jan 21 14:10:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.599+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ca297459-dcdc-48cc-b973-0a2fd8a93409' of type subvolume
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "force": true, "format": "json"}]: dispatch
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ca297459-dcdc-48cc-b973-0a2fd8a93409'' moved to trashcan
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ca297459-dcdc-48cc-b973-0a2fd8a93409, vol_name:cephfs) < ""
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.618+0000 7fc51965b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.618+0000 7fc51965b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.618+0000 7fc51965b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.618+0000 7fc51965b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.618+0000 7fc51965b640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.660+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.660+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.660+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.660+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:28.660+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:10:29 compute-0 podman[245141]: 2026-01-21 14:10:29.360211543 +0000 UTC m=+0.078059629 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 21 14:10:29 compute-0 podman[245140]: 2026-01-21 14:10:29.398039823 +0000 UTC m=+0.119314725 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 21 14:10:29 compute-0 ceph-mon[75031]: pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s wr, 2 op/s
Jan 21 14:10:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "format": "json"}]: dispatch
Jan 21 14:10:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ca297459-dcdc-48cc-b973-0a2fd8a93409", "force": true, "format": "json"}]: dispatch
Jan 21 14:10:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 3 op/s
Jan 21 14:10:30 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.tnwklj(active, since 25m)
Jan 21 14:10:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:31 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "format": "json"}]: dispatch
Jan 21 14:10:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:31 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec4e87bc-026b-4a6f-938e-c32b3b1010de' of type subvolume
Jan 21 14:10:31 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:31.467+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec4e87bc-026b-4a6f-938e-c32b3b1010de' of type subvolume
Jan 21 14:10:31 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "force": true, "format": "json"}]: dispatch
Jan 21 14:10:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 14:10:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ec4e87bc-026b-4a6f-938e-c32b3b1010de'' moved to trashcan
Jan 21 14:10:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:10:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec4e87bc-026b-4a6f-938e-c32b3b1010de, vol_name:cephfs) < ""
Jan 21 14:10:31 compute-0 ceph-mon[75031]: pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 3 op/s
Jan 21 14:10:31 compute-0 ceph-mon[75031]: mgrmap e13: compute-0.tnwklj(active, since 25m)
Jan 21 14:10:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 3 op/s
Jan 21 14:10:32 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 14:10:32 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0a16a328-6a6b-4997-8d01-233d8aaecf94/ca18ae93-6039-44b0-aed6-bffe7b551018'.
Jan 21 14:10:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0a16a328-6a6b-4997-8d01-233d8aaecf94/.meta.tmp'
Jan 21 14:10:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0a16a328-6a6b-4997-8d01-233d8aaecf94/.meta.tmp' to config b'/volumes/_nogroup/0a16a328-6a6b-4997-8d01-233d8aaecf94/.meta'
Jan 21 14:10:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 14:10:32 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "format": "json"}]: dispatch
Jan 21 14:10:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 14:10:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 14:10:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:10:32 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "format": "json"}]: dispatch
Jan 21 14:10:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:33 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f1e76c5b-dd9f-45f4-b2d2-e22465776219' of type subvolume
Jan 21 14:10:33 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:33.014+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f1e76c5b-dd9f-45f4-b2d2-e22465776219' of type subvolume
Jan 21 14:10:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "force": true, "format": "json"}]: dispatch
Jan 21 14:10:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 14:10:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f1e76c5b-dd9f-45f4-b2d2-e22465776219'' moved to trashcan
Jan 21 14:10:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:10:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f1e76c5b-dd9f-45f4-b2d2-e22465776219, vol_name:cephfs) < ""
Jan 21 14:10:33 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "format": "json"}]: dispatch
Jan 21 14:10:33 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec4e87bc-026b-4a6f-938e-c32b3b1010de", "force": true, "format": "json"}]: dispatch
Jan 21 14:10:33 compute-0 ceph-mon[75031]: pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 3 op/s
Jan 21 14:10:33 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:33 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:10:33.900 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:10:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:10:33.900 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:10:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:10:33.900 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:10:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 3 op/s
Jan 21 14:10:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "format": "json"}]: dispatch
Jan 21 14:10:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "format": "json"}]: dispatch
Jan 21 14:10:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f1e76c5b-dd9f-45f4-b2d2-e22465776219", "force": true, "format": "json"}]: dispatch
Jan 21 14:10:35 compute-0 ceph-mon[75031]: pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 3 op/s
Jan 21 14:10:35 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 20 KiB/s wr, 6 op/s
Jan 21 14:10:37 compute-0 ceph-mon[75031]: pgmap v896: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 20 KiB/s wr, 6 op/s
Jan 21 14:10:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 21 14:10:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 14:10:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 14:10:38 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 21 14:10:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 5 op/s
Jan 21 14:10:38 compute-0 nova_compute[239261]: 2026-01-21 14:10:38.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:10:38 compute-0 nova_compute[239261]: 2026-01-21 14:10:38.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:10:38 compute-0 nova_compute[239261]: 2026-01-21 14:10:38.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:10:38 compute-0 nova_compute[239261]: 2026-01-21 14:10:38.780 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "format": "json"}]: dispatch
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0a16a328-6a6b-4997-8d01-233d8aaecf94' of type subvolume
Jan 21 14:10:39 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:39.068+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0a16a328-6a6b-4997-8d01-233d8aaecf94' of type subvolume
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "force": true, "format": "json"}]: dispatch
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0a16a328-6a6b-4997-8d01-233d8aaecf94'' moved to trashcan
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0a16a328-6a6b-4997-8d01-233d8aaecf94, vol_name:cephfs) < ""
Jan 21 14:10:39 compute-0 ceph-mon[75031]: pgmap v897: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 5 op/s
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:10:39
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.meta']
Jan 21 14:10:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:10:39 compute-0 nova_compute[239261]: 2026-01-21 14:10:39.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:10:39 compute-0 nova_compute[239261]: 2026-01-21 14:10:39.754 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:10:39 compute-0 nova_compute[239261]: 2026-01-21 14:10:39.755 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:10:39 compute-0 nova_compute[239261]: 2026-01-21 14:10:39.755 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:10:39 compute-0 nova_compute[239261]: 2026-01-21 14:10:39.755 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:10:39 compute-0 nova_compute[239261]: 2026-01-21 14:10:39.755 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:10:40 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "format": "json"}]: dispatch
Jan 21 14:10:40 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0a16a328-6a6b-4997-8d01-233d8aaecf94", "force": true, "format": "json"}]: dispatch
Jan 21 14:10:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 16 KiB/s wr, 6 op/s
Jan 21 14:10:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:10:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3204784224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:10:40 compute-0 nova_compute[239261]: 2026-01-21 14:10:40.300 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:10:40 compute-0 nova_compute[239261]: 2026-01-21 14:10:40.477 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:10:40 compute-0 nova_compute[239261]: 2026-01-21 14:10:40.478 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5162MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:10:40 compute-0 nova_compute[239261]: 2026-01-21 14:10:40.479 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:10:40 compute-0 nova_compute[239261]: 2026-01-21 14:10:40.479 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:10:40 compute-0 nova_compute[239261]: 2026-01-21 14:10:40.543 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:10:40 compute-0 nova_compute[239261]: 2026-01-21 14:10:40.543 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:10:40 compute-0 nova_compute[239261]: 2026-01-21 14:10:40.564 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:10:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.968075) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004640968101, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 585, "num_deletes": 251, "total_data_size": 740232, "memory_usage": 750272, "flush_reason": "Manual Compaction"}
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004640974286, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 614936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18423, "largest_seqno": 19007, "table_properties": {"data_size": 611920, "index_size": 924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8035, "raw_average_key_size": 20, "raw_value_size": 605592, "raw_average_value_size": 1529, "num_data_blocks": 41, "num_entries": 396, "num_filter_entries": 396, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004609, "oldest_key_time": 1769004609, "file_creation_time": 1769004640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 6285 microseconds, and 2918 cpu microseconds.
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.974351) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 614936 bytes OK
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.974378) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.975911) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.975934) EVENT_LOG_v1 {"time_micros": 1769004640975927, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.975955) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 736918, prev total WAL file size 736918, number of live WAL files 2.
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.976503) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(600KB)], [41(9387KB)]
Jan 21 14:10:40 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004640976539, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10227433, "oldest_snapshot_seqno": -1}
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4330 keys, 6992315 bytes, temperature: kUnknown
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004641032799, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6992315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6963778, "index_size": 16587, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 105599, "raw_average_key_size": 24, "raw_value_size": 6885917, "raw_average_value_size": 1590, "num_data_blocks": 698, "num_entries": 4330, "num_filter_entries": 4330, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.033052) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6992315 bytes
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.036020) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.5 rd, 124.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.2 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(28.0) write-amplify(11.4) OK, records in: 4838, records dropped: 508 output_compression: NoCompression
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.036054) EVENT_LOG_v1 {"time_micros": 1769004641036037, "job": 20, "event": "compaction_finished", "compaction_time_micros": 56345, "compaction_time_cpu_micros": 20322, "output_level": 6, "num_output_files": 1, "total_output_size": 6992315, "num_input_records": 4838, "num_output_records": 4330, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004641036317, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004641038281, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:40.976404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.038474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.038482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.038484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.038487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:10:41 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:10:41.038489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:10:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:10:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187461756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:10:41 compute-0 nova_compute[239261]: 2026-01-21 14:10:41.091 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:10:41 compute-0 nova_compute[239261]: 2026-01-21 14:10:41.096 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:10:41 compute-0 nova_compute[239261]: 2026-01-21 14:10:41.124 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:10:41 compute-0 nova_compute[239261]: 2026-01-21 14:10:41.125 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:10:41 compute-0 nova_compute[239261]: 2026-01-21 14:10:41.125 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:10:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:10:41 compute-0 ceph-mon[75031]: pgmap v898: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 16 KiB/s wr, 6 op/s
Jan 21 14:10:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3204784224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:10:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2187461756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:10:42 compute-0 nova_compute[239261]: 2026-01-21 14:10:42.126 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:10:42 compute-0 nova_compute[239261]: 2026-01-21 14:10:42.127 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:10:42 compute-0 nova_compute[239261]: 2026-01-21 14:10:42.127 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:10:42 compute-0 nova_compute[239261]: 2026-01-21 14:10:42.127 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:10:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 10 KiB/s wr, 4 op/s
Jan 21 14:10:42 compute-0 nova_compute[239261]: 2026-01-21 14:10:42.719 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:10:42 compute-0 nova_compute[239261]: 2026-01-21 14:10:42.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:10:43 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 14:10:43 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/09f7a444-a5d6-4cd1-8195-bcb6a300bcd5/8b4e0f9c-cd5e-4a1d-b5b4-0c646ea195b3'.
Jan 21 14:10:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09f7a444-a5d6-4cd1-8195-bcb6a300bcd5/.meta.tmp'
Jan 21 14:10:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09f7a444-a5d6-4cd1-8195-bcb6a300bcd5/.meta.tmp' to config b'/volumes/_nogroup/09f7a444-a5d6-4cd1-8195-bcb6a300bcd5/.meta'
Jan 21 14:10:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 14:10:43 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "format": "json"}]: dispatch
Jan 21 14:10:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 14:10:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 14:10:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:10:43 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:43 compute-0 ceph-mon[75031]: pgmap v899: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 10 KiB/s wr, 4 op/s
Jan 21 14:10:43 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:43 compute-0 nova_compute[239261]: 2026-01-21 14:10:43.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:10:43 compute-0 nova_compute[239261]: 2026-01-21 14:10:43.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:10:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 10 KiB/s wr, 5 op/s
Jan 21 14:10:44 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:44 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "format": "json"}]: dispatch
Jan 21 14:10:45 compute-0 ceph-mon[75031]: pgmap v900: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 10 KiB/s wr, 5 op/s
Jan 21 14:10:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "format": "json"}]: dispatch
Jan 21 14:10:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:45 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09f7a444-a5d6-4cd1-8195-bcb6a300bcd5' of type subvolume
Jan 21 14:10:45 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:45.813+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09f7a444-a5d6-4cd1-8195-bcb6a300bcd5' of type subvolume
Jan 21 14:10:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "force": true, "format": "json"}]: dispatch
Jan 21 14:10:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 14:10:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/09f7a444-a5d6-4cd1-8195-bcb6a300bcd5'' moved to trashcan
Jan 21 14:10:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:10:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09f7a444-a5d6-4cd1-8195-bcb6a300bcd5, vol_name:cephfs) < ""
Jan 21 14:10:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 14 KiB/s wr, 7 op/s
Jan 21 14:10:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "format": "json"}]: dispatch
Jan 21 14:10:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "09f7a444-a5d6-4cd1-8195-bcb6a300bcd5", "force": true, "format": "json"}]: dispatch
Jan 21 14:10:47 compute-0 ceph-mon[75031]: pgmap v901: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 14 KiB/s wr, 7 op/s
Jan 21 14:10:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 6.4 KiB/s wr, 3 op/s
Jan 21 14:10:49 compute-0 ceph-mon[75031]: pgmap v902: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 6.4 KiB/s wr, 3 op/s
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 9.4 KiB/s wr, 5 op/s
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662231874301377 of space, bias 1.0, pg target 0.1998669562290413 quantized to 32 (current 32)
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.852332329108876e-06 of space, bias 4.0, pg target 0.007022798794930651 quantized to 16 (current 16)
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:10:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:10:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:51 compute-0 ceph-mon[75031]: pgmap v903: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 9.4 KiB/s wr, 5 op/s
Jan 21 14:10:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 6.7 KiB/s wr, 4 op/s
Jan 21 14:10:53 compute-0 ceph-mon[75031]: pgmap v904: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 6.7 KiB/s wr, 4 op/s
Jan 21 14:10:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 6.7 KiB/s wr, 4 op/s
Jan 21 14:10:55 compute-0 ceph-mon[75031]: pgmap v905: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 6.7 KiB/s wr, 4 op/s
Jan 21 14:10:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 14:10:55 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d765fd4c-f99f-46af-bd07-596dac7c37d5/801344bb-1db0-4dbb-90a5-ccedbd38215f'.
Jan 21 14:10:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d765fd4c-f99f-46af-bd07-596dac7c37d5/.meta.tmp'
Jan 21 14:10:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d765fd4c-f99f-46af-bd07-596dac7c37d5/.meta.tmp' to config b'/volumes/_nogroup/d765fd4c-f99f-46af-bd07-596dac7c37d5/.meta'
Jan 21 14:10:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 14:10:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "format": "json"}]: dispatch
Jan 21 14:10:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 14:10:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 14:10:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:10:55 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:10:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.6 KiB/s wr, 4 op/s
Jan 21 14:10:56 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:10:56 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "format": "json"}]: dispatch
Jan 21 14:10:56 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:10:57 compute-0 ceph-mon[75031]: pgmap v906: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.6 KiB/s wr, 4 op/s
Jan 21 14:10:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 4.0 KiB/s wr, 2 op/s
Jan 21 14:10:59 compute-0 ceph-mon[75031]: pgmap v907: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 4.0 KiB/s wr, 2 op/s
Jan 21 14:10:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "format": "json"}]: dispatch
Jan 21 14:10:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:10:59 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd765fd4c-f99f-46af-bd07-596dac7c37d5' of type subvolume
Jan 21 14:10:59 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:10:59.793+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd765fd4c-f99f-46af-bd07-596dac7c37d5' of type subvolume
Jan 21 14:10:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "force": true, "format": "json"}]: dispatch
Jan 21 14:10:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 14:10:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d765fd4c-f99f-46af-bd07-596dac7c37d5'' moved to trashcan
Jan 21 14:10:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:10:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d765fd4c-f99f-46af-bd07-596dac7c37d5, vol_name:cephfs) < ""
Jan 21 14:11:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 6.5 KiB/s wr, 3 op/s
Jan 21 14:11:00 compute-0 podman[245231]: 2026-01-21 14:11:00.345422885 +0000 UTC m=+0.054000454 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 21 14:11:00 compute-0 podman[245230]: 2026-01-21 14:11:00.367810121 +0000 UTC m=+0.088278336 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 21 14:11:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "format": "json"}]: dispatch
Jan 21 14:11:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d765fd4c-f99f-46af-bd07-596dac7c37d5", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:11:01 compute-0 ceph-mon[75031]: pgmap v908: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 6.5 KiB/s wr, 3 op/s
Jan 21 14:11:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 1 op/s
Jan 21 14:11:03 compute-0 ceph-mon[75031]: pgmap v909: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 1 op/s
Jan 21 14:11:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:11:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 14:11:03 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/18603bd9-4e2c-4abb-ab1b-01752b8839c2/deefac8d-d835-46fb-b96e-2a3f5c2af6a5'.
Jan 21 14:11:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18603bd9-4e2c-4abb-ab1b-01752b8839c2/.meta.tmp'
Jan 21 14:11:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18603bd9-4e2c-4abb-ab1b-01752b8839c2/.meta.tmp' to config b'/volumes/_nogroup/18603bd9-4e2c-4abb-ab1b-01752b8839c2/.meta'
Jan 21 14:11:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 14:11:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "format": "json"}]: dispatch
Jan 21 14:11:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 14:11:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 14:11:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:11:03 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:11:04 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "079a2c23-2c42-4cc1-a9d6-b5424fcac054", "format": "json"}]: dispatch
Jan 21 14:11:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:04 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:11:04 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "format": "json"}]: dispatch
Jan 21 14:11:04 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:11:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 1 op/s
Jan 21 14:11:05 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "079a2c23-2c42-4cc1-a9d6-b5424fcac054", "format": "json"}]: dispatch
Jan 21 14:11:05 compute-0 ceph-mon[75031]: pgmap v910: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 1 op/s
Jan 21 14:11:05 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "079a2c23-2c42-4cc1-a9d6-b5424fcac054_2fe464ee-20eb-425c-b4bb-d3f446c877cd", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054_2fe464ee-20eb-425c-b4bb-d3f446c877cd, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:11:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:11:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054_2fe464ee-20eb-425c-b4bb-d3f446c877cd, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:05 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "079a2c23-2c42-4cc1-a9d6-b5424fcac054", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:11:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:11:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:079a2c23-2c42-4cc1-a9d6-b5424fcac054, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:11:06 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "079a2c23-2c42-4cc1-a9d6-b5424fcac054_2fe464ee-20eb-425c-b4bb-d3f446c877cd", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:06 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "079a2c23-2c42-4cc1-a9d6-b5424fcac054", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 9.5 KiB/s wr, 4 op/s
Jan 21 14:11:07 compute-0 ceph-mon[75031]: pgmap v911: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 9.5 KiB/s wr, 4 op/s
Jan 21 14:11:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:11:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 14:11:07 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/73408dc6-4c0b-4079-a270-af31e9a2608f/0db648a5-66e9-45ca-ac1f-bc80d2193114'.
Jan 21 14:11:07 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 14:11:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/73408dc6-4c0b-4079-a270-af31e9a2608f/.meta.tmp'
Jan 21 14:11:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/73408dc6-4c0b-4079-a270-af31e9a2608f/.meta.tmp' to config b'/volumes/_nogroup/73408dc6-4c0b-4079-a270-af31e9a2608f/.meta'
Jan 21 14:11:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 14:11:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "format": "json"}]: dispatch
Jan 21 14:11:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 14:11:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 14:11:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:11:07 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:11:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:11:08 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:11:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 8.5 KiB/s wr, 3 op/s
Jan 21 14:11:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "format": "json"}]: dispatch
Jan 21 14:11:09 compute-0 ceph-mon[75031]: pgmap v912: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 8.5 KiB/s wr, 3 op/s
Jan 21 14:11:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 15 KiB/s wr, 6 op/s
Jan 21 14:11:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:11:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:11:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:11:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:11:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:11:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:11:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:11:11 compute-0 ceph-mon[75031]: pgmap v913: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 15 KiB/s wr, 6 op/s
Jan 21 14:11:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 13 KiB/s wr, 5 op/s
Jan 21 14:11:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 21 14:11:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 21 14:11:12 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 21 14:11:13 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "format": "json"}]: dispatch
Jan 21 14:11:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:11:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:11:13 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18603bd9-4e2c-4abb-ab1b-01752b8839c2' of type subvolume
Jan 21 14:11:13 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:11:13.261+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18603bd9-4e2c-4abb-ab1b-01752b8839c2' of type subvolume
Jan 21 14:11:13 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 14:11:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/18603bd9-4e2c-4abb-ab1b-01752b8839c2'' moved to trashcan
Jan 21 14:11:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:11:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18603bd9-4e2c-4abb-ab1b-01752b8839c2, vol_name:cephfs) < ""
Jan 21 14:11:13 compute-0 ceph-mon[75031]: pgmap v914: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 13 KiB/s wr, 5 op/s
Jan 21 14:11:13 compute-0 ceph-mon[75031]: osdmap e128: 3 total, 3 up, 3 in
Jan 21 14:11:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 15 KiB/s wr, 6 op/s
Jan 21 14:11:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "format": "json"}]: dispatch
Jan 21 14:11:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "18603bd9-4e2c-4abb-ab1b-01752b8839c2", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:15 compute-0 ceph-mon[75031]: pgmap v916: 305 pgs: 305 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 15 KiB/s wr, 6 op/s
Jan 21 14:11:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:11:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 15 KiB/s wr, 4 op/s
Jan 21 14:11:16 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "format": "json"}]: dispatch
Jan 21 14:11:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:73408dc6-4c0b-4079-a270-af31e9a2608f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:11:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:73408dc6-4c0b-4079-a270-af31e9a2608f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:11:16 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73408dc6-4c0b-4079-a270-af31e9a2608f' of type subvolume
Jan 21 14:11:16 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:11:16.803+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '73408dc6-4c0b-4079-a270-af31e9a2608f' of type subvolume
Jan 21 14:11:16 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 14:11:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/73408dc6-4c0b-4079-a270-af31e9a2608f'' moved to trashcan
Jan 21 14:11:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:11:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:73408dc6-4c0b-4079-a270-af31e9a2608f, vol_name:cephfs) < ""
Jan 21 14:11:17 compute-0 ceph-mon[75031]: pgmap v917: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 15 KiB/s wr, 4 op/s
Jan 21 14:11:17 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "format": "json"}]: dispatch
Jan 21 14:11:17 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "73408dc6-4c0b-4079-a270-af31e9a2608f", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 15 KiB/s wr, 4 op/s
Jan 21 14:11:18 compute-0 sudo[245275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:11:18 compute-0 sudo[245275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:11:18 compute-0 sudo[245275]: pam_unix(sudo:session): session closed for user root
Jan 21 14:11:18 compute-0 sudo[245300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:11:18 compute-0 sudo[245300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:11:19 compute-0 sudo[245300]: pam_unix(sudo:session): session closed for user root
Jan 21 14:11:19 compute-0 ceph-mon[75031]: pgmap v918: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 15 KiB/s wr, 4 op/s
Jan 21 14:11:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:11:19 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:11:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:11:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:11:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:11:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:11:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:11:19 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:11:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:11:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:11:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:11:19 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:11:19 compute-0 sudo[245358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:11:19 compute-0 sudo[245358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:11:19 compute-0 sudo[245358]: pam_unix(sudo:session): session closed for user root
Jan 21 14:11:19 compute-0 sudo[245383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:11:19 compute-0 sudo[245383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:11:20 compute-0 podman[245420]: 2026-01-21 14:11:20.032216759 +0000 UTC m=+0.056367171 container create b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:11:20 compute-0 systemd[1]: Started libpod-conmon-b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6.scope.
Jan 21 14:11:20 compute-0 podman[245420]: 2026-01-21 14:11:20.012197772 +0000 UTC m=+0.036348194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:11:20 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:11:20 compute-0 podman[245420]: 2026-01-21 14:11:20.125154718 +0000 UTC m=+0.149305140 container init b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 14:11:20 compute-0 podman[245420]: 2026-01-21 14:11:20.13287383 +0000 UTC m=+0.157024232 container start b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:11:20 compute-0 podman[245420]: 2026-01-21 14:11:20.13646356 +0000 UTC m=+0.160613992 container attach b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 14:11:20 compute-0 systemd[1]: libpod-b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6.scope: Deactivated successfully.
Jan 21 14:11:20 compute-0 funny_swanson[245436]: 167 167
Jan 21 14:11:20 compute-0 conmon[245436]: conmon b9c15494e112e2cac944 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6.scope/container/memory.events
Jan 21 14:11:20 compute-0 podman[245420]: 2026-01-21 14:11:20.140407187 +0000 UTC m=+0.164557589 container died b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 14:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6af6ea8dbb83ba104b7972947f74aa9133e8709acbae1470ceabd98fd5a36871-merged.mount: Deactivated successfully.
Jan 21 14:11:20 compute-0 podman[245420]: 2026-01-21 14:11:20.188084751 +0000 UTC m=+0.212235163 container remove b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:11:20 compute-0 systemd[1]: libpod-conmon-b9c15494e112e2cac94401cbb389f0f75ce0ca927802b9c36ec2687e543274e6.scope: Deactivated successfully.
Jan 21 14:11:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 14:11:20 compute-0 podman[245460]: 2026-01-21 14:11:20.36468245 +0000 UTC m=+0.048275431 container create 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:11:20 compute-0 systemd[1]: Started libpod-conmon-3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e.scope.
Jan 21 14:11:20 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:11:20 compute-0 podman[245460]: 2026-01-21 14:11:20.343871183 +0000 UTC m=+0.027464184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:20 compute-0 podman[245460]: 2026-01-21 14:11:20.468377187 +0000 UTC m=+0.151970198 container init 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:11:20 compute-0 podman[245460]: 2026-01-21 14:11:20.479358759 +0000 UTC m=+0.162951750 container start 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 21 14:11:20 compute-0 podman[245460]: 2026-01-21 14:11:20.483380939 +0000 UTC m=+0.166973930 container attach 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 14:11:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:11:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:11:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:11:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:11:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:11:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:11:20 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "c5e71a6b-b6f4-4c59-b979-36f333691be0", "format": "json"}]: dispatch
Jan 21 14:11:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:21 compute-0 amazing_pike[245476]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:11:21 compute-0 amazing_pike[245476]: --> All data devices are unavailable
Jan 21 14:11:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:11:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 21 14:11:21 compute-0 systemd[1]: libpod-3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e.scope: Deactivated successfully.
Jan 21 14:11:21 compute-0 podman[245460]: 2026-01-21 14:11:21.050897631 +0000 UTC m=+0.734490612 container died 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 14:11:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 21 14:11:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.378352) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681378382, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 664, "num_deletes": 257, "total_data_size": 676140, "memory_usage": 689816, "flush_reason": "Manual Compaction"}
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681386839, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 669361, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19008, "largest_seqno": 19671, "table_properties": {"data_size": 665964, "index_size": 1241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7969, "raw_average_key_size": 18, "raw_value_size": 658812, "raw_average_value_size": 1521, "num_data_blocks": 57, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004641, "oldest_key_time": 1769004641, "file_creation_time": 1769004681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8560 microseconds, and 3583 cpu microseconds.
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.386900) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 669361 bytes OK
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.386936) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.389876) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.389894) EVENT_LOG_v1 {"time_micros": 1769004681389889, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.389911) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 672555, prev total WAL file size 672555, number of live WAL files 2.
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.390307) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(653KB)], [44(6828KB)]
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681390381, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7661676, "oldest_snapshot_seqno": -1}
Jan 21 14:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cda5f426816d8018ccb7fba65a30bb637d5fdfdd78538b8dfa80d1466a39d84-merged.mount: Deactivated successfully.
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4236 keys, 7542305 bytes, temperature: kUnknown
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681474266, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7542305, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7513289, "index_size": 17322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 104928, "raw_average_key_size": 24, "raw_value_size": 7435934, "raw_average_value_size": 1755, "num_data_blocks": 726, "num_entries": 4236, "num_filter_entries": 4236, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.475117) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7542305 bytes
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.476654) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.6 rd, 89.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 6.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(22.7) write-amplify(11.3) OK, records in: 4763, records dropped: 527 output_compression: NoCompression
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.476684) EVENT_LOG_v1 {"time_micros": 1769004681476668, "job": 22, "event": "compaction_finished", "compaction_time_micros": 84567, "compaction_time_cpu_micros": 30170, "output_level": 6, "num_output_files": 1, "total_output_size": 7542305, "num_input_records": 4763, "num_output_records": 4236, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681477194, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004681479092, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.390215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.479286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.479299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.479305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.479309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:11:21 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:11:21.479314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
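[annotation] The ceph-mon entries above embed RocksDB EVENT_LOG_v1 records as single-line JSON after a fixed marker: the compaction_finished record for job 22 reports 7,661,676 input bytes rewritten to 7,542,305 bytes in 84,567 microseconds, which is the ~90.6 MB/s read rate the summary line also prints. A minimal sketch for pulling these records out of captured journal lines, assuming lines in exactly this format arrive on stdin; the script and its names are illustrative, not part of any Ceph tooling:

    #!/usr/bin/env python3
    """Extract RocksDB EVENT_LOG_v1 records from ceph-mon journal lines (illustrative)."""
    import json
    import sys

    MARKER = "EVENT_LOG_v1 "

    def iter_events(lines):
        # Each matching line carries exactly one JSON object after the marker.
        for line in lines:
            _, _, payload = line.partition(MARKER)
            if payload:
                try:
                    yield json.loads(payload)
                except json.JSONDecodeError:
                    continue  # tolerate truncated journal lines

    for ev in iter_events(sys.stdin):
        if ev.get("event") == "compaction_finished":
            mb = ev["total_output_size"] / 1e6
            secs = ev["compaction_time_micros"] / 1e6
            print(f"job {ev['job']}: L{ev['output_level']} "
                  f"{ev['num_input_records']} -> {ev['num_output_records']} records, "
                  f"{mb:.1f} MB in {secs:.3f}s")

Fed the lines above, it would print one summary for job 22; the field names are taken directly from the records logged here.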
Jan 21 14:11:21 compute-0 podman[245460]: 2026-01-21 14:11:21.520787886 +0000 UTC m=+1.204380897 container remove 3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pike, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:11:21 compute-0 systemd[1]: libpod-conmon-3f3a8f761e554a1e36420db010d10cc7df719322bc8c0c1fe02614942abf445e.scope: Deactivated successfully.
Jan 21 14:11:21 compute-0 sudo[245383]: pam_unix(sudo:session): session closed for user root
Jan 21 14:11:21 compute-0 ceph-mon[75031]: pgmap v919: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 14:11:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "c5e71a6b-b6f4-4c59-b979-36f333691be0", "format": "json"}]: dispatch
Jan 21 14:11:21 compute-0 ceph-mon[75031]: osdmap e129: 3 total, 3 up, 3 in
Jan 21 14:11:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:11:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
Jan 21 14:11:21 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/be026c8c-9a77-4436-9eb0-bd80e75652ed/c17a5ed2-c845-4ea1-bc03-5533f6ecbf92'.
Jan 21 14:11:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/be026c8c-9a77-4436-9eb0-bd80e75652ed/.meta.tmp'
Jan 21 14:11:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/be026c8c-9a77-4436-9eb0-bd80e75652ed/.meta.tmp' to config b'/volumes/_nogroup/be026c8c-9a77-4436-9eb0-bd80e75652ed/.meta'
Jan 21 14:11:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
Jan 21 14:11:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "format": "json"}]: dispatch
Jan 21 14:11:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
Jan 21 14:11:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
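[annotation] The audit trail above is the standard OpenStack share-creation pattern: client.openstack dispatches "fs subvolume create" (1 GiB, namespace-isolated, mode 0755) and immediately follows with "fs subvolume getpath" for the new UUID-named subvolume. A minimal sketch reproducing that pair with the ceph CLI, assuming the CLI and a suitable keyring are available on the host; the helper name and the generated UUID are illustrative:

    #!/usr/bin/env python3
    """Issue the subvolume create/getpath pair seen in the audit log (illustrative)."""
    import json
    import subprocess
    import uuid

    VOL = "cephfs"           # volume name from the log
    SUB = str(uuid.uuid4())  # the log uses one UUID per share

    def ceph(*args):
        # Global --format json works for mgr-backed commands; create returns no body.
        out = subprocess.check_output(("ceph", *args, "--format", "json"))
        return json.loads(out) if out.strip() else None

    # Same arguments as the dispatched cmd above: 1 GiB, isolated namespace, mode 0755.
    ceph("fs", "subvolume", "create", VOL, SUB,
         "--size", str(1 << 30), "--namespace-isolated", "--mode", "0755")
    path = subprocess.check_output(
        ("ceph", "fs", "subvolume", "getpath", VOL, SUB)).decode().strip()
    print(f"{SUB} -> {path}")

The returned path lands under /volumes/_nogroup/<sub_name>/<uuid>, matching the earmarking and metadata_manager lines above.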
Jan 21 14:11:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:11:21 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:11:21 compute-0 sudo[245509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:11:21 compute-0 sudo[245509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:11:21 compute-0 sudo[245509]: pam_unix(sudo:session): session closed for user root
Jan 21 14:11:21 compute-0 sudo[245534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:11:21 compute-0 sudo[245534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:11:22 compute-0 podman[245570]: 2026-01-21 14:11:22.062383533 +0000 UTC m=+0.063119559 container create 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:11:22 compute-0 systemd[1]: Started libpod-conmon-3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5.scope.
Jan 21 14:11:22 compute-0 podman[245570]: 2026-01-21 14:11:22.033752282 +0000 UTC m=+0.034488388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:11:22 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:11:22 compute-0 podman[245570]: 2026-01-21 14:11:22.148511754 +0000 UTC m=+0.149247820 container init 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:11:22 compute-0 podman[245570]: 2026-01-21 14:11:22.160352897 +0000 UTC m=+0.161088943 container start 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:11:22 compute-0 podman[245570]: 2026-01-21 14:11:22.164363807 +0000 UTC m=+0.165099863 container attach 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 14:11:22 compute-0 determined_fermi[245586]: 167 167
Jan 21 14:11:22 compute-0 systemd[1]: libpod-3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5.scope: Deactivated successfully.
Jan 21 14:11:22 compute-0 podman[245570]: 2026-01-21 14:11:22.168948441 +0000 UTC m=+0.169684497 container died 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 14:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a881eff8c77ea85eaf46606867073bf1b0c3c6fcb9bdea410e454e0db06db7f-merged.mount: Deactivated successfully.
Jan 21 14:11:22 compute-0 podman[245570]: 2026-01-21 14:11:22.214973795 +0000 UTC m=+0.215709831 container remove 3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_fermi, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:11:22 compute-0 systemd[1]: libpod-conmon-3415356ff3dfc5cc73ca4fe4126b3abe979677411aa1639318215b7a793f89a5.scope: Deactivated successfully.
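[annotation] The sequence above (create, init, start, attach, died, remove inside roughly 150 ms) is cephadm running a short-lived helper container; the "167 167" it prints is the container's uid/gid probe. One way to watch these lifecycles live is podman's event stream; a sketch, with the caveat that the JSON field names are as emitted by recent podman releases and may differ on other versions:

    #!/usr/bin/env python3
    """Follow container lifecycle events like the ones journaled above (illustrative)."""
    import json
    import subprocess

    # 'podman events --format json' emits one JSON object per line.
    proc = subprocess.Popen(
        ("podman", "events", "--format", "json", "--filter", "type=container"),
        stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:
        ev = json.loads(line)
        # Status values mirror the journal entries: create/init/start/attach/died/remove.
        print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Name"))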
Jan 21 14:11:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 416 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 14:11:22 compute-0 podman[245610]: 2026-01-21 14:11:22.409662912 +0000 UTC m=+0.050567667 container create 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 14:11:22 compute-0 systemd[1]: Started libpod-conmon-06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b.scope.
Jan 21 14:11:22 compute-0 podman[245610]: 2026-01-21 14:11:22.382340274 +0000 UTC m=+0.023245059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:11:22 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6181a8334bec669f7ceb09c155a40b85d334a01fb720d8278db13bac90710/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6181a8334bec669f7ceb09c155a40b85d334a01fb720d8278db13bac90710/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6181a8334bec669f7ceb09c155a40b85d334a01fb720d8278db13bac90710/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6181a8334bec669f7ceb09c155a40b85d334a01fb720d8278db13bac90710/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:22 compute-0 podman[245610]: 2026-01-21 14:11:22.506651652 +0000 UTC m=+0.147556407 container init 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 14:11:22 compute-0 podman[245610]: 2026-01-21 14:11:22.513294017 +0000 UTC m=+0.154198772 container start 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 21 14:11:22 compute-0 podman[245610]: 2026-01-21 14:11:22.518172039 +0000 UTC m=+0.159076794 container attach 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:11:22 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:11:22 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "format": "json"}]: dispatch
Jan 21 14:11:22 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:11:22 compute-0 romantic_swartz[245626]: {
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:     "0": [
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:         {
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "devices": [
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "/dev/loop3"
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             ],
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_name": "ceph_lv0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_size": "21470642176",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "name": "ceph_lv0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "tags": {
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.cluster_name": "ceph",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.crush_device_class": "",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.encrypted": "0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.objectstore": "bluestore",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.osd_id": "0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.type": "block",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.vdo": "0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.with_tpm": "0"
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             },
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "type": "block",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "vg_name": "ceph_vg0"
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:         }
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:     ],
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:     "1": [
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:         {
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "devices": [
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "/dev/loop4"
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             ],
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_name": "ceph_lv1",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_size": "21470642176",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "name": "ceph_lv1",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "tags": {
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.cluster_name": "ceph",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.crush_device_class": "",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.encrypted": "0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.objectstore": "bluestore",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.osd_id": "1",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.type": "block",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.vdo": "0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.with_tpm": "0"
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             },
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "type": "block",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "vg_name": "ceph_vg1"
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:         }
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:     ],
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:     "2": [
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:         {
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "devices": [
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "/dev/loop5"
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             ],
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_name": "ceph_lv2",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_size": "21470642176",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "name": "ceph_lv2",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "tags": {
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.cluster_name": "ceph",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.crush_device_class": "",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.encrypted": "0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.objectstore": "bluestore",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.osd_id": "2",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.type": "block",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.vdo": "0",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:                 "ceph.with_tpm": "0"
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             },
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "type": "block",
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:             "vg_name": "ceph_vg2"
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:         }
Jan 21 14:11:22 compute-0 romantic_swartz[245626]:     ]
Jan 21 14:11:22 compute-0 romantic_swartz[245626]: }
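[annotation] The JSON block just printed is the output of the "ceph-volume ... lvm list --format json" call that cephadm launched at 14:11:21: a map from OSD id ("0", "1", "2") to logical-volume records, each carrying the device (/dev/loop3..5), the LV path, and the ceph.* tags that bind the LV to its OSD fsid. A minimal sketch that summarizes such a report, assuming the JSON is fed on stdin; note lv_size is a byte count as a string (21470642176 is about 20.0 GiB):

    #!/usr/bin/env python3
    """Summarize 'ceph-volume lvm list --format json' output (illustrative)."""
    import json
    import sys

    # Top level maps OSD id ("0", "1", "2") to a list of logical-volume records.
    report = json.load(sys.stdin)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            size_gib = int(lv["lv_size"]) / 2**30  # lv_size is a string of bytes
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({size_gib:.1f} GiB, fsid {tags['ceph.osd_fsid']}, "
                  f"devices {','.join(lv['devices'])})")

Against the report above this yields three lines, one per OSD, e.g. osd.0 on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3.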
Jan 21 14:11:22 compute-0 systemd[1]: libpod-06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b.scope: Deactivated successfully.
Jan 21 14:11:22 compute-0 podman[245610]: 2026-01-21 14:11:22.839691867 +0000 UTC m=+0.480596692 container died 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-62d6181a8334bec669f7ceb09c155a40b85d334a01fb720d8278db13bac90710-merged.mount: Deactivated successfully.
Jan 21 14:11:22 compute-0 podman[245610]: 2026-01-21 14:11:22.885386983 +0000 UTC m=+0.526291758 container remove 06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swartz, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 14:11:22 compute-0 systemd[1]: libpod-conmon-06629b1bb8b812bf799700ca692bb4ec4b2c3704546d0e5bdc2ff2dcdd70482b.scope: Deactivated successfully.
Jan 21 14:11:22 compute-0 sudo[245534]: pam_unix(sudo:session): session closed for user root
Jan 21 14:11:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:11:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1443226209' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:11:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:11:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1443226209' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
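[annotation] The paired "df" and "osd pool get-quota" dispatches from client.openstack are how the OpenStack driver computes usable capacity for the "volumes" pool: cluster-wide totals from df, then any per-pool quota cap. A sketch issuing the same two queries, assuming the ceph CLI is available; the quota_max_bytes/quota_max_objects keys are as returned by current Ceph releases:

    #!/usr/bin/env python3
    """Fetch the same capacity data client.openstack requests above (illustrative)."""
    import json
    import subprocess

    def ceph_json(*args):
        return json.loads(subprocess.check_output(("ceph", *args, "-f", "json")))

    df = ceph_json("df")
    quota = ceph_json("osd", "pool", "get-quota", "volumes")

    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"cluster: {avail / 2**30:.1f} / {total / 2**30:.1f} GiB free")
    print(f"pool 'volumes' quota: max_bytes={quota['quota_max_bytes']} "
          f"max_objects={quota['quota_max_objects']}")

On this cluster the df figures would match the pgmap lines nearby: 196 MiB used of 60 GiB.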
Jan 21 14:11:23 compute-0 sudo[245646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:11:23 compute-0 sudo[245646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:11:23 compute-0 sudo[245646]: pam_unix(sudo:session): session closed for user root
Jan 21 14:11:23 compute-0 sudo[245671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:11:23 compute-0 sudo[245671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:11:23 compute-0 podman[245709]: 2026-01-21 14:11:23.386357641 +0000 UTC m=+0.037950074 container create 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:11:23 compute-0 systemd[1]: Started libpod-conmon-854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77.scope.
Jan 21 14:11:23 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:11:23 compute-0 podman[245709]: 2026-01-21 14:11:23.46200194 +0000 UTC m=+0.113594463 container init 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 14:11:23 compute-0 podman[245709]: 2026-01-21 14:11:23.372300352 +0000 UTC m=+0.023892805 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:11:23 compute-0 podman[245709]: 2026-01-21 14:11:23.46845778 +0000 UTC m=+0.120050213 container start 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 14:11:23 compute-0 podman[245709]: 2026-01-21 14:11:23.472489591 +0000 UTC m=+0.124082114 container attach 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 14:11:23 compute-0 focused_wu[245726]: 167 167
Jan 21 14:11:23 compute-0 systemd[1]: libpod-854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77.scope: Deactivated successfully.
Jan 21 14:11:23 compute-0 podman[245709]: 2026-01-21 14:11:23.473582328 +0000 UTC m=+0.125174771 container died 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 14:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7eef203bbacf5ad34bfd76daf38eb8c5e54d036f3b09bf5e34ad951ca22a2dc4-merged.mount: Deactivated successfully.
Jan 21 14:11:23 compute-0 podman[245709]: 2026-01-21 14:11:23.511644484 +0000 UTC m=+0.163236917 container remove 854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_wu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:11:23 compute-0 systemd[1]: libpod-conmon-854041e2574c487b9c31c23e8b8212fa3d58f836f34985bb5d35cae33c45aa77.scope: Deactivated successfully.
Jan 21 14:11:23 compute-0 ceph-mon[75031]: pgmap v921: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 416 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 14:11:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1443226209' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:11:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1443226209' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:11:23 compute-0 podman[245751]: 2026-01-21 14:11:23.721357695 +0000 UTC m=+0.056807273 container create 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 14:11:23 compute-0 systemd[1]: Started libpod-conmon-51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0.scope.
Jan 21 14:11:23 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:11:23 compute-0 podman[245751]: 2026-01-21 14:11:23.704363122 +0000 UTC m=+0.039812730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9cc37da6250c37e36714bdad7b59b554e6cce230ca66b3e06398e7b1eb7e38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9cc37da6250c37e36714bdad7b59b554e6cce230ca66b3e06398e7b1eb7e38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9cc37da6250c37e36714bdad7b59b554e6cce230ca66b3e06398e7b1eb7e38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9cc37da6250c37e36714bdad7b59b554e6cce230ca66b3e06398e7b1eb7e38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:11:23 compute-0 podman[245751]: 2026-01-21 14:11:23.817126514 +0000 UTC m=+0.152576172 container init 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 14:11:23 compute-0 podman[245751]: 2026-01-21 14:11:23.824897267 +0000 UTC m=+0.160346865 container start 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:11:23 compute-0 podman[245751]: 2026-01-21 14:11:23.828706202 +0000 UTC m=+0.164155860 container attach 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 14:11:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 14:11:24 compute-0 lvm[245846]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:11:24 compute-0 lvm[245846]: VG ceph_vg0 finished
Jan 21 14:11:24 compute-0 lvm[245847]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:11:24 compute-0 lvm[245847]: VG ceph_vg1 finished
Jan 21 14:11:24 compute-0 lvm[245849]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:11:24 compute-0 lvm[245849]: VG ceph_vg2 finished
Jan 21 14:11:24 compute-0 wonderful_wing[245768]: {}
Jan 21 14:11:24 compute-0 systemd[1]: libpod-51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0.scope: Deactivated successfully.
Jan 21 14:11:24 compute-0 systemd[1]: libpod-51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0.scope: Consumed 1.305s CPU time.
Jan 21 14:11:24 compute-0 podman[245751]: 2026-01-21 14:11:24.628182578 +0000 UTC m=+0.963632176 container died 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e9cc37da6250c37e36714bdad7b59b554e6cce230ca66b3e06398e7b1eb7e38-merged.mount: Deactivated successfully.
Jan 21 14:11:24 compute-0 podman[245751]: 2026-01-21 14:11:24.673056603 +0000 UTC m=+1.008506191 container remove 51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:11:24 compute-0 systemd[1]: libpod-conmon-51f0eee18522f1b2241e22c5d83d2110996a67cb2700ee78bdafe9b97c4a17b0.scope: Deactivated successfully.
Jan 21 14:11:24 compute-0 sudo[245671]: pam_unix(sudo:session): session closed for user root
Jan 21 14:11:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:11:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:11:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:11:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:11:24 compute-0 sudo[245863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:11:24 compute-0 sudo[245863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:11:24 compute-0 sudo[245863]: pam_unix(sudo:session): session closed for user root
Jan 21 14:11:25 compute-0 ceph-mon[75031]: pgmap v922: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 14:11:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:11:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:11:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:11:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 14:11:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "56d6dc8f-03ac-4a2a-b985-23defb122518", "format": "json"}]: dispatch
Jan 21 14:11:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:11:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:11:28 compute-0 ceph-mon[75031]: pgmap v923: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 14:11:28 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/f9253af6-64bc-4ad7-b4bd-56feef7fa9fe'.
Jan 21 14:11:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 14:11:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp'
Jan 21 14:11:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp' to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta'
Jan 21 14:11:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:11:28 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "format": "json"}]: dispatch
Jan 21 14:11:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:11:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:11:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:11:28 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:11:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "56d6dc8f-03ac-4a2a-b985-23defb122518", "format": "json"}]: dispatch
Jan 21 14:11:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:11:29 compute-0 ceph-mon[75031]: pgmap v924: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 11 KiB/s wr, 4 op/s
Jan 21 14:11:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "format": "json"}]: dispatch
Jan 21 14:11:29 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:11:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 3 op/s
Jan 21 14:11:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:11:31 compute-0 ceph-mon[75031]: pgmap v925: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 3 op/s
Jan 21 14:11:31 compute-0 podman[245889]: 2026-01-21 14:11:31.359800971 +0000 UTC m=+0.086163530 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 21 14:11:31 compute-0 podman[245888]: 2026-01-21 14:11:31.371839715 +0000 UTC m=+0.098120102 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 21 14:11:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Jan 21 14:11:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "snap_name": "18fb6d14-013d-43de-a247-048a332ec2b1", "format": "json"}]: dispatch
Jan 21 14:11:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:11:33 compute-0 ceph-mon[75031]: pgmap v926: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Jan 21 14:11:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:11:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce", "format": "json"}]: dispatch
Jan 21 14:11:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:11:33.902 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:11:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:11:33.903 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:11:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:11:33.903 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:11:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s wr, 2 op/s
Jan 21 14:11:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "snap_name": "18fb6d14-013d-43de-a247-048a332ec2b1", "format": "json"}]: dispatch
Jan 21 14:11:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce", "format": "json"}]: dispatch
Jan 21 14:11:35 compute-0 ceph-mon[75031]: pgmap v927: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s wr, 2 op/s
Jan 21 14:11:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:11:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 3 op/s
Jan 21 14:11:36 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:11:36.919 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:11:36 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:11:36.920 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:11:37 compute-0 ceph-mon[75031]: pgmap v928: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 3 op/s
Jan 21 14:11:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s wr, 1 op/s
Jan 21 14:11:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "d00da2a0-c417-42b5-bf93-02a64cbb16fe", "format": "json"}]: dispatch
Jan 21 14:11:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:38 compute-0 nova_compute[239261]: 2026-01-21 14:11:38.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:11:38 compute-0 nova_compute[239261]: 2026-01-21 14:11:38.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:11:38 compute-0 nova_compute[239261]: 2026-01-21 14:11:38.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:11:38 compute-0 nova_compute[239261]: 2026-01-21 14:11:38.753 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:11:39 compute-0 ceph-mon[75031]: pgmap v929: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s wr, 1 op/s
Jan 21 14:11:39 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "d00da2a0-c417-42b5-bf93-02a64cbb16fe", "format": "json"}]: dispatch
Jan 21 14:11:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:11:39
Jan 21 14:11:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:11:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:11:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'cephfs.cephfs.meta', '.mgr']
Jan 21 14:11:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:11:39 compute-0 nova_compute[239261]: 2026-01-21 14:11:39.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:11:40 compute-0 nova_compute[239261]: 2026-01-21 14:11:40.128 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:11:40 compute-0 nova_compute[239261]: 2026-01-21 14:11:40.128 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:11:40 compute-0 nova_compute[239261]: 2026-01-21 14:11:40.128 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:11:40 compute-0 nova_compute[239261]: 2026-01-21 14:11:40.128 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:11:40 compute-0 nova_compute[239261]: 2026-01-21 14:11:40.128 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:11:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 2 op/s
Jan 21 14:11:40 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "97526c93-fc84-45f0-b580-04d89d51b5a7", "format": "json"}]: dispatch
Jan 21 14:11:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:11:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152294106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:11:40 compute-0 nova_compute[239261]: 2026-01-21 14:11:40.664 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:11:40 compute-0 nova_compute[239261]: 2026-01-21 14:11:40.803 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:11:40 compute-0 nova_compute[239261]: 2026-01-21 14:11:40.805 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5117MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:11:40 compute-0 nova_compute[239261]: 2026-01-21 14:11:40.805 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:11:40 compute-0 nova_compute[239261]: 2026-01-21 14:11:40.805 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:11:40 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:11:40.921 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:11:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:11:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:11:41 compute-0 ceph-mon[75031]: pgmap v930: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 2 op/s
Jan 21 14:11:41 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "97526c93-fc84-45f0-b580-04d89d51b5a7", "format": "json"}]: dispatch
Jan 21 14:11:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3152294106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:11:41 compute-0 nova_compute[239261]: 2026-01-21 14:11:41.512 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:11:41 compute-0 nova_compute[239261]: 2026-01-21 14:11:41.513 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:11:41 compute-0 nova_compute[239261]: 2026-01-21 14:11:41.537 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:11:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:11:42 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2480374122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:11:42 compute-0 nova_compute[239261]: 2026-01-21 14:11:42.087 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:11:42 compute-0 nova_compute[239261]: 2026-01-21 14:11:42.094 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:11:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s wr, 1 op/s
Jan 21 14:11:42 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2480374122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:11:42 compute-0 nova_compute[239261]: 2026-01-21 14:11:42.737 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:11:42 compute-0 nova_compute[239261]: 2026-01-21 14:11:42.740 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:11:42 compute-0 nova_compute[239261]: 2026-01-21 14:11:42.740 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:11:43 compute-0 ceph-mon[75031]: pgmap v931: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s wr, 1 op/s
Jan 21 14:11:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s wr, 1 op/s
Jan 21 14:11:44 compute-0 nova_compute[239261]: 2026-01-21 14:11:44.742 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:11:44 compute-0 nova_compute[239261]: 2026-01-21 14:11:44.742 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:11:44 compute-0 nova_compute[239261]: 2026-01-21 14:11:44.767 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:11:44 compute-0 nova_compute[239261]: 2026-01-21 14:11:44.767 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:11:44 compute-0 nova_compute[239261]: 2026-01-21 14:11:44.767 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:11:44 compute-0 nova_compute[239261]: 2026-01-21 14:11:44.767 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:11:44 compute-0 nova_compute[239261]: 2026-01-21 14:11:44.768 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:11:44 compute-0 nova_compute[239261]: 2026-01-21 14:11:44.768 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:11:44 compute-0 nova_compute[239261]: 2026-01-21 14:11:44.768 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:11:44 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "97526c93-fc84-45f0-b580-04d89d51b5a7_7af6d476-9e96-455d-901d-cb117be73224", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:44 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7_7af6d476-9e96-455d-901d-cb117be73224, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:11:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:11:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7_7af6d476-9e96-455d-901d-cb117be73224, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "97526c93-fc84-45f0-b580-04d89d51b5a7", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:11:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:11:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:97526c93-fc84-45f0-b580-04d89d51b5a7, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:45 compute-0 ceph-mon[75031]: pgmap v932: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s wr, 1 op/s
Jan 21 14:11:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:11:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Jan 21 14:11:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "97526c93-fc84-45f0-b580-04d89d51b5a7_7af6d476-9e96-455d-901d-cb117be73224", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "97526c93-fc84-45f0-b580-04d89d51b5a7", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:47 compute-0 ceph-mon[75031]: pgmap v933: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Jan 21 14:11:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Jan 21 14:11:49 compute-0 ceph-mon[75031]: pgmap v934: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662229856041607 of space, bias 1.0, pg target 0.1998668956812482 quantized to 32 (current 32)
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4002298155263771e-05 of space, bias 4.0, pg target 0.016802757786316524 quantized to 16 (current 16)
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:11:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:11:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 21 14:11:51 compute-0 ceph-mon[75031]: pgmap v935: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 14:11:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 10 KiB/s wr, 2 op/s
Jan 21 14:11:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 21 14:11:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 21 14:11:52 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 21 14:11:53 compute-0 ceph-mon[75031]: pgmap v936: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 10 KiB/s wr, 2 op/s
Jan 21 14:11:53 compute-0 ceph-mon[75031]: osdmap e130: 3 total, 3 up, 3 in
Jan 21 14:11:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 13 KiB/s wr, 2 op/s
Jan 21 14:11:54 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "d00da2a0-c417-42b5-bf93-02a64cbb16fe_3545ae94-f727-47c4-a7fd-1c526ecea0fa", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe_3545ae94-f727-47c4-a7fd-1c526ecea0fa, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:11:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:11:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe_3545ae94-f727-47c4-a7fd-1c526ecea0fa, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:54 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "d00da2a0-c417-42b5-bf93-02a64cbb16fe", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:11:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:11:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d00da2a0-c417-42b5-bf93-02a64cbb16fe, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:55 compute-0 ceph-mon[75031]: pgmap v938: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 13 KiB/s wr, 2 op/s
Jan 21 14:11:55 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "d00da2a0-c417-42b5-bf93-02a64cbb16fe_3545ae94-f727-47c4-a7fd-1c526ecea0fa", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:55 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "d00da2a0-c417-42b5-bf93-02a64cbb16fe", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:11:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 14:11:57 compute-0 ceph-mon[75031]: pgmap v939: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 14:11:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 14:11:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce_13d8e771-64b1-4720-ac35-75306a2796ca", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce_13d8e771-64b1-4720-ac35-75306a2796ca, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:11:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:11:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce_13d8e771-64b1-4720-ac35-75306a2796ca, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce", "force": true, "format": "json"}]: dispatch
Jan 21 14:11:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:11:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:11:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:11:59 compute-0 ceph-mon[75031]: pgmap v940: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 14:12:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "snap_name": "18fb6d14-013d-43de-a247-048a332ec2b1_b687d358-d36c-4697-89e0-1aa237110732", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1_b687d358-d36c-4697-89e0-1aa237110732, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:12:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp'
Jan 21 14:12:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp' to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta'
Jan 21 14:12:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1_b687d358-d36c-4697-89e0-1aa237110732, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:12:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "snap_name": "18fb6d14-013d-43de-a247-048a332ec2b1", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:12:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp'
Jan 21 14:12:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta.tmp' to config b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0/.meta'
Jan 21 14:12:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:18fb6d14-013d-43de-a247-048a332ec2b1, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:12:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 20 KiB/s wr, 3 op/s
Jan 21 14:12:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce_13d8e771-64b1-4720-ac35-75306a2796ca", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "ef40f7be-1cf7-4119-b7a9-71eb5b9dc8ce", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 21 14:12:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 21 14:12:01 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 21 14:12:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "snap_name": "18fb6d14-013d-43de-a247-048a332ec2b1_b687d358-d36c-4697-89e0-1aa237110732", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "snap_name": "18fb6d14-013d-43de-a247-048a332ec2b1", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:01 compute-0 ceph-mon[75031]: pgmap v941: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 20 KiB/s wr, 3 op/s
Jan 21 14:12:01 compute-0 ceph-mon[75031]: osdmap e131: 3 total, 3 up, 3 in
Jan 21 14:12:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 21 14:12:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 21 14:12:02 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 21 14:12:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 25 KiB/s wr, 4 op/s
Jan 21 14:12:02 compute-0 podman[245977]: 2026-01-21 14:12:02.327691637 +0000 UTC m=+0.052381777 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 14:12:02 compute-0 podman[245976]: 2026-01-21 14:12:02.359731618 +0000 UTC m=+0.086629842 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 21 14:12:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "56d6dc8f-03ac-4a2a-b985-23defb122518_61418a5a-530f-4665-9f42-37eaec4b7f3b", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518_61418a5a-530f-4665-9f42-37eaec4b7f3b, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:12:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 21 14:12:03 compute-0 ceph-mon[75031]: osdmap e132: 3 total, 3 up, 3 in
Jan 21 14:12:03 compute-0 ceph-mon[75031]: pgmap v944: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 25 KiB/s wr, 4 op/s
Jan 21 14:12:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:12:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:12:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518_61418a5a-530f-4665-9f42-37eaec4b7f3b, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:12:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 21 14:12:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "56d6dc8f-03ac-4a2a-b985-23defb122518", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:12:03 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 21 14:12:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:12:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:12:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:56d6dc8f-03ac-4a2a-b985-23defb122518, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:12:04 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "format": "json"}]: dispatch
Jan 21 14:12:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:04 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '901b9a46-d563-4a2c-bc82-2f893614e2f0' of type subvolume
Jan 21 14:12:04 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:04.064+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '901b9a46-d563-4a2c-bc82-2f893614e2f0' of type subvolume
Jan 21 14:12:04 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:12:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/901b9a46-d563-4a2c-bc82-2f893614e2f0'' moved to trashcan
Jan 21 14:12:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:12:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:901b9a46-d563-4a2c-bc82-2f893614e2f0, vol_name:cephfs) < ""
Jan 21 14:12:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 28 KiB/s wr, 4 op/s
Jan 21 14:12:04 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "56d6dc8f-03ac-4a2a-b985-23defb122518_61418a5a-530f-4665-9f42-37eaec4b7f3b", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:04 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "56d6dc8f-03ac-4a2a-b985-23defb122518", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:04 compute-0 ceph-mon[75031]: osdmap e133: 3 total, 3 up, 3 in
Jan 21 14:12:05 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "format": "json"}]: dispatch
Jan 21 14:12:05 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "901b9a46-d563-4a2c-bc82-2f893614e2f0", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:05 compute-0 ceph-mon[75031]: pgmap v946: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 28 KiB/s wr, 4 op/s
Jan 21 14:12:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 43 KiB/s wr, 7 op/s
Jan 21 14:12:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 21 14:12:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 21 14:12:07 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 21 14:12:07 compute-0 ceph-mon[75031]: pgmap v947: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 43 KiB/s wr, 7 op/s
Jan 21 14:12:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "c5e71a6b-b6f4-4c59-b979-36f333691be0_a8b3e1a0-047e-4d57-b74a-72ec2981cdc4", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0_a8b3e1a0-047e-4d57-b74a-72ec2981cdc4, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:12:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:12:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:12:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0_a8b3e1a0-047e-4d57-b74a-72ec2981cdc4, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "c5e71a6b-b6f4-4c59-b979-36f333691be0", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp'
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta.tmp' to config b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447/.meta'
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5e71a6b-b6f4-4c59-b979-36f333691be0, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 659 B/s rd, 42 KiB/s wr, 7 op/s
Jan 21 14:12:08 compute-0 ceph-mon[75031]: osdmap e134: 3 total, 3 up, 3 in
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "format": "json"}]: dispatch
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:08 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:08.527+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'be026c8c-9a77-4436-9eb0-bd80e75652ed' of type subvolume
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'be026c8c-9a77-4436-9eb0-bd80e75652ed' of type subvolume
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/be026c8c-9a77-4436-9eb0-bd80e75652ed'' moved to trashcan
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:12:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:be026c8c-9a77-4436-9eb0-bd80e75652ed, vol_name:cephfs) < ""
Jan 21 14:12:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "c5e71a6b-b6f4-4c59-b979-36f333691be0_a8b3e1a0-047e-4d57-b74a-72ec2981cdc4", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "snap_name": "c5e71a6b-b6f4-4c59-b979-36f333691be0", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:09 compute-0 ceph-mon[75031]: pgmap v949: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 659 B/s rd, 42 KiB/s wr, 7 op/s
Jan 21 14:12:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "format": "json"}]: dispatch
Jan 21 14:12:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "be026c8c-9a77-4436-9eb0-bd80e75652ed", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 51 KiB/s wr, 10 op/s
Jan 21 14:12:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:12:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:12:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:12:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:12:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:12:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:12:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 21 14:12:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 21 14:12:11 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 21 14:12:11 compute-0 ceph-mon[75031]: pgmap v950: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 51 KiB/s wr, 10 op/s
Jan 21 14:12:11 compute-0 ceph-mon[75031]: osdmap e135: 3 total, 3 up, 3 in
Jan 21 14:12:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 21 14:12:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 21 14:12:12 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 21 14:12:12 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "format": "json"}]: dispatch
Jan 21 14:12:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:12 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:12.129+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd3ce0e74-c7d0-4049-ba17-7d4296160447' of type subvolume
Jan 21 14:12:12 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd3ce0e74-c7d0-4049-ba17-7d4296160447' of type subvolume
Jan 21 14:12:12 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:12:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d3ce0e74-c7d0-4049-ba17-7d4296160447'' moved to trashcan
Jan 21 14:12:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:12:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d3ce0e74-c7d0-4049-ba17-7d4296160447, vol_name:cephfs) < ""
Jan 21 14:12:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 25 KiB/s wr, 6 op/s
Jan 21 14:12:13 compute-0 ceph-mon[75031]: osdmap e136: 3 total, 3 up, 3 in
Jan 21 14:12:13 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "format": "json"}]: dispatch
Jan 21 14:12:13 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d3ce0e74-c7d0-4049-ba17-7d4296160447", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:13 compute-0 ceph-mon[75031]: pgmap v953: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 25 KiB/s wr, 6 op/s
Jan 21 14:12:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 22 KiB/s wr, 5 op/s
Jan 21 14:12:15 compute-0 ceph-mon[75031]: pgmap v954: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 22 KiB/s wr, 5 op/s
Jan 21 14:12:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 36 KiB/s wr, 9 op/s
Jan 21 14:12:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea104d42-8223-4da1-870a-ba39917e4943", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:12:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 14:12:17 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ea104d42-8223-4da1-870a-ba39917e4943/1282aaf8-9b49-40e2-a843-a3b0a737b268'.
Jan 21 14:12:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea104d42-8223-4da1-870a-ba39917e4943/.meta.tmp'
Jan 21 14:12:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea104d42-8223-4da1-870a-ba39917e4943/.meta.tmp' to config b'/volumes/_nogroup/ea104d42-8223-4da1-870a-ba39917e4943/.meta'
Jan 21 14:12:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 14:12:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea104d42-8223-4da1-870a-ba39917e4943", "format": "json"}]: dispatch
Jan 21 14:12:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 14:12:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 14:12:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:12:17 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:12:17 compute-0 ceph-mon[75031]: pgmap v955: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 36 KiB/s wr, 9 op/s
Jan 21 14:12:17 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:12:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 18 KiB/s wr, 4 op/s
Jan 21 14:12:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea104d42-8223-4da1-870a-ba39917e4943", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:12:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea104d42-8223-4da1-870a-ba39917e4943", "format": "json"}]: dispatch
Jan 21 14:12:19 compute-0 ceph-mon[75031]: pgmap v956: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 18 KiB/s wr, 4 op/s
Jan 21 14:12:20 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea104d42-8223-4da1-870a-ba39917e4943", "format": "json"}]: dispatch
Jan 21 14:12:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea104d42-8223-4da1-870a-ba39917e4943, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea104d42-8223-4da1-870a-ba39917e4943, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:20.273+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea104d42-8223-4da1-870a-ba39917e4943' of type subvolume
Jan 21 14:12:20 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea104d42-8223-4da1-870a-ba39917e4943' of type subvolume
Jan 21 14:12:20 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea104d42-8223-4da1-870a-ba39917e4943", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 14:12:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea104d42-8223-4da1-870a-ba39917e4943'' moved to trashcan
Jan 21 14:12:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:12:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea104d42-8223-4da1-870a-ba39917e4943, vol_name:cephfs) < ""
Jan 21 14:12:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 444 B/s rd, 21 KiB/s wr, 5 op/s
Jan 21 14:12:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 21 14:12:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 21 14:12:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 21 14:12:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea104d42-8223-4da1-870a-ba39917e4943", "format": "json"}]: dispatch
Jan 21 14:12:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea104d42-8223-4da1-870a-ba39917e4943", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:21 compute-0 ceph-mon[75031]: pgmap v957: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 444 B/s rd, 21 KiB/s wr, 5 op/s
Jan 21 14:12:21 compute-0 ceph-mon[75031]: osdmap e137: 3 total, 3 up, 3 in
Jan 21 14:12:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 19 KiB/s wr, 4 op/s
Jan 21 14:12:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:12:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1954126033' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:12:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:12:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1954126033' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:12:24 compute-0 ceph-mon[75031]: pgmap v959: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 19 KiB/s wr, 4 op/s
Jan 21 14:12:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1954126033' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:12:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1954126033' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:12:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 19 KiB/s wr, 4 op/s
Jan 21 14:12:24 compute-0 sudo[246018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:12:24 compute-0 sudo[246018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:12:24 compute-0 sudo[246018]: pam_unix(sudo:session): session closed for user root
Jan 21 14:12:24 compute-0 sudo[246043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:12:24 compute-0 sudo[246043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:12:25 compute-0 sudo[246043]: pam_unix(sudo:session): session closed for user root
Jan 21 14:12:25 compute-0 ceph-mon[75031]: pgmap v960: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 19 KiB/s wr, 4 op/s
Jan 21 14:12:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:12:25 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:12:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:12:25 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:12:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:12:25 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:12:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:12:25 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:12:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:12:25 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:12:25 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:12:25 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:12:25 compute-0 sudo[246099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:12:25 compute-0 sudo[246099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:12:25 compute-0 sudo[246099]: pam_unix(sudo:session): session closed for user root
Jan 21 14:12:25 compute-0 sudo[246124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:12:25 compute-0 sudo[246124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:12:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:26 compute-0 podman[246161]: 2026-01-21 14:12:26.089773512 +0000 UTC m=+0.050075060 container create 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 14:12:26 compute-0 systemd[1]: Started libpod-conmon-3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369.scope.
Jan 21 14:12:26 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:12:26 compute-0 podman[246161]: 2026-01-21 14:12:26.062292303 +0000 UTC m=+0.022593871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:12:26 compute-0 podman[246161]: 2026-01-21 14:12:26.20376817 +0000 UTC m=+0.164069748 container init 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 14:12:26 compute-0 podman[246161]: 2026-01-21 14:12:26.210837253 +0000 UTC m=+0.171138801 container start 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 14:12:26 compute-0 modest_taussig[246178]: 167 167
Jan 21 14:12:26 compute-0 systemd[1]: libpod-3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369.scope: Deactivated successfully.
Jan 21 14:12:26 compute-0 conmon[246178]: conmon 3134018e7de34ea5acc7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369.scope/container/memory.events
Jan 21 14:12:26 compute-0 podman[246161]: 2026-01-21 14:12:26.227015766 +0000 UTC m=+0.187317334 container attach 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Jan 21 14:12:26 compute-0 podman[246161]: 2026-01-21 14:12:26.228601815 +0000 UTC m=+0.188903373 container died 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 14:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-74fc27476b31014afcfedbd2bec6a309681f51f26fd176e2b6f611f325567d4a-merged.mount: Deactivated successfully.
Jan 21 14:12:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 3 op/s
Jan 21 14:12:26 compute-0 podman[246161]: 2026-01-21 14:12:26.377308819 +0000 UTC m=+0.337610397 container remove 3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:12:26 compute-0 systemd[1]: libpod-conmon-3134018e7de34ea5acc702eedb940c0c8490980eb535f1d870fa2f0027195369.scope: Deactivated successfully.
Jan 21 14:12:26 compute-0 podman[246205]: 2026-01-21 14:12:26.555660584 +0000 UTC m=+0.045253193 container create d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:12:26 compute-0 systemd[1]: Started libpod-conmon-d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4.scope.
Jan 21 14:12:26 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:26 compute-0 podman[246205]: 2026-01-21 14:12:26.536777875 +0000 UTC m=+0.026370484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:12:26 compute-0 podman[246205]: 2026-01-21 14:12:26.645596176 +0000 UTC m=+0.135188815 container init d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 14:12:26 compute-0 podman[246205]: 2026-01-21 14:12:26.65189694 +0000 UTC m=+0.141489539 container start d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 14:12:26 compute-0 podman[246205]: 2026-01-21 14:12:26.659176487 +0000 UTC m=+0.148769106 container attach d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:12:26 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:12:26 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:12:26 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:12:26 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:12:26 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:12:26 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:12:27 compute-0 angry_austin[246221]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:12:27 compute-0 angry_austin[246221]: --> All data devices are unavailable
Jan 21 14:12:27 compute-0 systemd[1]: libpod-d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4.scope: Deactivated successfully.
Jan 21 14:12:27 compute-0 podman[246205]: 2026-01-21 14:12:27.11978241 +0000 UTC m=+0.609375049 container died d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 21 14:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-968f5fb8e77cabe94a98fe92b982aab1107ac9d33330ea4802b990f03eade247-merged.mount: Deactivated successfully.
Jan 21 14:12:27 compute-0 podman[246205]: 2026-01-21 14:12:27.318446261 +0000 UTC m=+0.808038870 container remove d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 14:12:27 compute-0 systemd[1]: libpod-conmon-d29ca680d62c49d9145864719b881d5b61b9daf47b6773e90f3d098ad8d7e7e4.scope: Deactivated successfully.
Jan 21 14:12:27 compute-0 sudo[246124]: pam_unix(sudo:session): session closed for user root
Jan 21 14:12:27 compute-0 sudo[246251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:12:27 compute-0 sudo[246251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:12:27 compute-0 sudo[246251]: pam_unix(sudo:session): session closed for user root
Jan 21 14:12:27 compute-0 sudo[246276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:12:27 compute-0 sudo[246276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:12:27 compute-0 podman[246314]: 2026-01-21 14:12:27.711108189 +0000 UTC m=+0.018286327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:12:27 compute-0 podman[246314]: 2026-01-21 14:12:27.850968887 +0000 UTC m=+0.158146985 container create 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:12:27 compute-0 ceph-mon[75031]: pgmap v961: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 3 op/s
Jan 21 14:12:27 compute-0 systemd[1]: Started libpod-conmon-600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806.scope.
Jan 21 14:12:27 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:12:27 compute-0 podman[246314]: 2026-01-21 14:12:27.973648076 +0000 UTC m=+0.280826254 container init 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 14:12:27 compute-0 podman[246314]: 2026-01-21 14:12:27.980657547 +0000 UTC m=+0.287835655 container start 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 14:12:27 compute-0 nice_faraday[246330]: 167 167
Jan 21 14:12:27 compute-0 podman[246314]: 2026-01-21 14:12:27.985160607 +0000 UTC m=+0.292338715 container attach 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:12:27 compute-0 systemd[1]: libpod-600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806.scope: Deactivated successfully.
Jan 21 14:12:27 compute-0 podman[246314]: 2026-01-21 14:12:27.987717129 +0000 UTC m=+0.294895237 container died 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-81dcb4357e6fd6febef62fbf00e8a84581eb7faf931db1f65735a0df26f4260d-merged.mount: Deactivated successfully.
Jan 21 14:12:28 compute-0 podman[246314]: 2026-01-21 14:12:28.029948048 +0000 UTC m=+0.337126166 container remove 600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 21 14:12:28 compute-0 systemd[1]: libpod-conmon-600bbd39469322b011f693783db0659ebe5186d07f69575cb6c7edfa2d951806.scope: Deactivated successfully.
Jan 21 14:12:28 compute-0 podman[246354]: 2026-01-21 14:12:28.215373876 +0000 UTC m=+0.043284075 container create 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:12:28 compute-0 systemd[1]: Started libpod-conmon-391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6.scope.
Jan 21 14:12:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed074985d419b903dce009984ca748b8db5435aa2a7e314c63a6e9d013087ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed074985d419b903dce009984ca748b8db5435aa2a7e314c63a6e9d013087ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed074985d419b903dce009984ca748b8db5435aa2a7e314c63a6e9d013087ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed074985d419b903dce009984ca748b8db5435aa2a7e314c63a6e9d013087ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:28 compute-0 podman[246354]: 2026-01-21 14:12:28.278187716 +0000 UTC m=+0.106097945 container init 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 14:12:28 compute-0 podman[246354]: 2026-01-21 14:12:28.283734692 +0000 UTC m=+0.111644901 container start 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 14:12:28 compute-0 podman[246354]: 2026-01-21 14:12:28.287371701 +0000 UTC m=+0.115281930 container attach 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:12:28 compute-0 podman[246354]: 2026-01-21 14:12:28.196276441 +0000 UTC m=+0.024186700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:12:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 3 op/s
Jan 21 14:12:28 compute-0 frosty_easley[246370]: {
Jan 21 14:12:28 compute-0 frosty_easley[246370]:     "0": [
Jan 21 14:12:28 compute-0 frosty_easley[246370]:         {
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "devices": [
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "/dev/loop3"
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             ],
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_name": "ceph_lv0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_size": "21470642176",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "name": "ceph_lv0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "tags": {
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.cluster_name": "ceph",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.crush_device_class": "",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.encrypted": "0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.objectstore": "bluestore",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.osd_id": "0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.type": "block",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.vdo": "0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.with_tpm": "0"
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             },
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "type": "block",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "vg_name": "ceph_vg0"
Jan 21 14:12:28 compute-0 frosty_easley[246370]:         }
Jan 21 14:12:28 compute-0 frosty_easley[246370]:     ],
Jan 21 14:12:28 compute-0 frosty_easley[246370]:     "1": [
Jan 21 14:12:28 compute-0 frosty_easley[246370]:         {
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "devices": [
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "/dev/loop4"
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             ],
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_name": "ceph_lv1",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_size": "21470642176",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "name": "ceph_lv1",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "tags": {
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.cluster_name": "ceph",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.crush_device_class": "",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.encrypted": "0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.objectstore": "bluestore",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.osd_id": "1",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.type": "block",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.vdo": "0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.with_tpm": "0"
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             },
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "type": "block",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "vg_name": "ceph_vg1"
Jan 21 14:12:28 compute-0 frosty_easley[246370]:         }
Jan 21 14:12:28 compute-0 frosty_easley[246370]:     ],
Jan 21 14:12:28 compute-0 frosty_easley[246370]:     "2": [
Jan 21 14:12:28 compute-0 frosty_easley[246370]:         {
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "devices": [
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "/dev/loop5"
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             ],
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_name": "ceph_lv2",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_size": "21470642176",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "name": "ceph_lv2",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "tags": {
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.cluster_name": "ceph",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.crush_device_class": "",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.encrypted": "0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.objectstore": "bluestore",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.osd_id": "2",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.type": "block",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.vdo": "0",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:                 "ceph.with_tpm": "0"
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             },
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "type": "block",
Jan 21 14:12:28 compute-0 frosty_easley[246370]:             "vg_name": "ceph_vg2"
Jan 21 14:12:28 compute-0 frosty_easley[246370]:         }
Jan 21 14:12:28 compute-0 frosty_easley[246370]:     ]
Jan 21 14:12:28 compute-0 frosty_easley[246370]: }
Jan 21 14:12:28 compute-0 systemd[1]: libpod-391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6.scope: Deactivated successfully.
Jan 21 14:12:28 compute-0 podman[246354]: 2026-01-21 14:12:28.571115755 +0000 UTC m=+0.399025974 container died 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 14:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-fed074985d419b903dce009984ca748b8db5435aa2a7e314c63a6e9d013087ae-merged.mount: Deactivated successfully.
Jan 21 14:12:28 compute-0 podman[246354]: 2026-01-21 14:12:28.62017012 +0000 UTC m=+0.448080329 container remove 391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:12:28 compute-0 systemd[1]: libpod-conmon-391fed2ffb0536f8076b7496e504fd384666d3ca0a10cc541c245bb12fa094b6.scope: Deactivated successfully.
Jan 21 14:12:28 compute-0 sudo[246276]: pam_unix(sudo:session): session closed for user root
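The `frosty_easley` JSON above is the `ceph-volume lvm list --format json` payload: one key per OSD id, each holding the logical volumes behind that OSD, with the cluster and OSD fsids carried as LV tags. A small sketch, using only the structure visible in the log, that folds this payload into an OSD-to-backing-device map:

    import json

    def osd_devices(lvm_list_json: str) -> dict[int, list[str]]:
        # Keys are OSD ids as strings ("0", "1", "2"); each value is a list
        # of LV records. Only ceph.type=block LVs hold BlueStore data;
        # separate db/wal LVs would be tagged ceph.type=db or ceph.type=wal.
        mapping: dict[int, list[str]] = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            mapping[int(osd_id)] = [dev for lv in lvs
                                    for dev in lv["devices"]
                                    if lv["tags"].get("ceph.type") == "block"]
        return mapping

    # For the payload in this log:
    # {0: ["/dev/loop3"], 1: ["/dev/loop4"], 2: ["/dev/loop5"]}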
Jan 21 14:12:28 compute-0 sudo[246390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:12:28 compute-0 sudo[246390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:12:28 compute-0 sudo[246390]: pam_unix(sudo:session): session closed for user root
Jan 21 14:12:28 compute-0 sudo[246415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:12:28 compute-0 sudo[246415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:12:29 compute-0 podman[246452]: 2026-01-21 14:12:29.079680037 +0000 UTC m=+0.044936266 container create 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:12:29 compute-0 systemd[1]: Started libpod-conmon-136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb.scope.
Jan 21 14:12:29 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:12:29 compute-0 podman[246452]: 2026-01-21 14:12:29.143214414 +0000 UTC m=+0.108470673 container init 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:12:29 compute-0 podman[246452]: 2026-01-21 14:12:29.149698653 +0000 UTC m=+0.114954882 container start 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:12:29 compute-0 podman[246452]: 2026-01-21 14:12:29.152837499 +0000 UTC m=+0.118093788 container attach 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 14:12:29 compute-0 elegant_morse[246468]: 167 167
Jan 21 14:12:29 compute-0 systemd[1]: libpod-136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb.scope: Deactivated successfully.
Jan 21 14:12:29 compute-0 podman[246452]: 2026-01-21 14:12:29.059763742 +0000 UTC m=+0.025020001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:12:29 compute-0 podman[246452]: 2026-01-21 14:12:29.154746545 +0000 UTC m=+0.120002794 container died 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 14:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9241a76fd878c73f06d339b0c286ef12a10008a7a59c55087277a10bf40b9126-merged.mount: Deactivated successfully.
Jan 21 14:12:29 compute-0 podman[246452]: 2026-01-21 14:12:29.19841078 +0000 UTC m=+0.163667029 container remove 136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 14:12:29 compute-0 systemd[1]: libpod-conmon-136421b5f42096bb1b2d539553fdf10e0f590431d0e83563bfc09b98bcad2ffb.scope: Deactivated successfully.
Jan 21 14:12:29 compute-0 podman[246492]: 2026-01-21 14:12:29.390608033 +0000 UTC m=+0.039146185 container create 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 14:12:29 compute-0 systemd[1]: Started libpod-conmon-4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9.scope.
Jan 21 14:12:29 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444ef7a6efef35fb32d6385acc0f8a538b590571a84c57728e5fb81a497ba0cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444ef7a6efef35fb32d6385acc0f8a538b590571a84c57728e5fb81a497ba0cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444ef7a6efef35fb32d6385acc0f8a538b590571a84c57728e5fb81a497ba0cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444ef7a6efef35fb32d6385acc0f8a538b590571a84c57728e5fb81a497ba0cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:12:29 compute-0 podman[246492]: 2026-01-21 14:12:29.373996248 +0000 UTC m=+0.022534420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:12:29 compute-0 podman[246492]: 2026-01-21 14:12:29.478402532 +0000 UTC m=+0.126940784 container init 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:12:29 compute-0 podman[246492]: 2026-01-21 14:12:29.494402952 +0000 UTC m=+0.142941144 container start 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:12:29 compute-0 podman[246492]: 2026-01-21 14:12:29.49883688 +0000 UTC m=+0.147375042 container attach 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:12:29 compute-0 ceph-mon[75031]: pgmap v962: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 3 op/s
Jan 21 14:12:30 compute-0 lvm[246586]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:12:30 compute-0 lvm[246586]: VG ceph_vg0 finished
Jan 21 14:12:30 compute-0 lvm[246588]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:12:30 compute-0 lvm[246588]: VG ceph_vg1 finished
Jan 21 14:12:30 compute-0 lvm[246590]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:12:30 compute-0 lvm[246590]: VG ceph_vg2 finished
Jan 21 14:12:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 16 KiB/s wr, 2 op/s
Jan 21 14:12:30 compute-0 practical_dhawan[246508]: {}
Jan 21 14:12:30 compute-0 systemd[1]: libpod-4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9.scope: Deactivated successfully.
Jan 21 14:12:30 compute-0 systemd[1]: libpod-4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9.scope: Consumed 1.398s CPU time.
Jan 21 14:12:30 compute-0 podman[246492]: 2026-01-21 14:12:30.375261835 +0000 UTC m=+1.023799987 container died 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-444ef7a6efef35fb32d6385acc0f8a538b590571a84c57728e5fb81a497ba0cd-merged.mount: Deactivated successfully.
Jan 21 14:12:30 compute-0 podman[246492]: 2026-01-21 14:12:30.424531166 +0000 UTC m=+1.073069318 container remove 4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 21 14:12:30 compute-0 systemd[1]: libpod-conmon-4bd6607022c2c395c3b5b2f447636def3a1941dfbaadfb0a09e18d8afb9dfbf9.scope: Deactivated successfully.
Jan 21 14:12:30 compute-0 sudo[246415]: pam_unix(sudo:session): session closed for user root
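The follow-up `ceph-volume raw list --format json` pass (the `practical_dhawan` container) printed an empty object: this host has no raw-mode, non-LVM OSDs, consistent with the three LVM OSDs listed just before. Cephadm runs both listings to refresh its per-host device inventory, which it then persists via the `config-key set` calls below. A hedged sketch of the same two calls side by side, mirroring the wrapper-style invocation from the log:

    import json
    import subprocess

    FSID = "2f0e9cad-f0a3-5869-9cc3-8d84d071866a"

    def ceph_volume(*args):
        # Same argument order as the cephadm runs in this log, e.g.
        # `ceph-volume --fsid <fsid> -- lvm list --format json`.
        out = subprocess.run(
            ["cephadm", "ceph-volume", "--fsid", FSID, "--", *args,
             "--format", "json"],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    lvm = ceph_volume("lvm", "list")   # one entry per OSD id, as above
    raw = ceph_volume("raw", "list")   # empty here: no raw-mode OSDs
    print(sorted(lvm), sorted(raw))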
Jan 21 14:12:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:12:30 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:12:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:12:30 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:12:30 compute-0 sudo[246604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:12:30 compute-0 sudo[246604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:12:30 compute-0 sudo[246604]: pam_unix(sudo:session): session closed for user root
Jan 21 14:12:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:31 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:12:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 14:12:31 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/672756b3-d8dc-429b-8b05-6a6f7934e823/1fba2d90-325a-4d51-8a85-96a3d9d56a0b'.
Jan 21 14:12:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/672756b3-d8dc-429b-8b05-6a6f7934e823/.meta.tmp'
Jan 21 14:12:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/672756b3-d8dc-429b-8b05-6a6f7934e823/.meta.tmp' to config b'/volumes/_nogroup/672756b3-d8dc-429b-8b05-6a6f7934e823/.meta'
Jan 21 14:12:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 14:12:31 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "format": "json"}]: dispatch
Jan 21 14:12:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 14:12:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 14:12:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:12:31 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:12:31 compute-0 ceph-mon[75031]: pgmap v963: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 16 KiB/s wr, 2 op/s
Jan 21 14:12:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:12:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:12:31 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:12:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 14:12:33 compute-0 podman[246630]: 2026-01-21 14:12:33.351391003 +0000 UTC m=+0.072223341 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:12:33 compute-0 podman[246629]: 2026-01-21 14:12:33.38740824 +0000 UTC m=+0.108581066 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 21 14:12:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:12:33.902 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:12:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:12:33.903 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:12:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:12:33.903 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:12:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 14:12:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:12:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "format": "json"}]: dispatch
Jan 21 14:12:35 compute-0 ceph-mon[75031]: pgmap v964: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 14:12:35 compute-0 ceph-mon[75031]: pgmap v965: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 2 op/s
Jan 21 14:12:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 19 KiB/s wr, 2 op/s
Jan 21 14:12:36 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "format": "json"}]: dispatch
Jan 21 14:12:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:672756b3-d8dc-429b-8b05-6a6f7934e823, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:672756b3-d8dc-429b-8b05-6a6f7934e823, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:36 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:36.398+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '672756b3-d8dc-429b-8b05-6a6f7934e823' of type subvolume
Jan 21 14:12:36 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '672756b3-d8dc-429b-8b05-6a6f7934e823' of type subvolume
Jan 21 14:12:36 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 14:12:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/672756b3-d8dc-429b-8b05-6a6f7934e823'' moved to trashcan
Jan 21 14:12:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:12:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:672756b3-d8dc-429b-8b05-6a6f7934e823, vol_name:cephfs) < ""
Jan 21 14:12:37 compute-0 ceph-mon[75031]: pgmap v966: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 19 KiB/s wr, 2 op/s
Jan 21 14:12:37 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "format": "json"}]: dispatch
Jan 21 14:12:37 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "672756b3-d8dc-429b-8b05-6a6f7934e823", "force": true, "format": "json"}]: dispatch
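The audit sequence from `client.openstack` traces a Manila-style CephFS share lifecycle: create a namespace-isolated 1 GiB subvolume, resolve its path, probe `fs clone status` (which fails with `EOPNOTSUPP` because status tracking only exists for subvolumes created as clones), then remove it with `--force`, which renames the subvolume into the volume's trash and queues an asynchronous purge job. The equivalent CLI sequence, sketched with the subvolume name from the log and a trivial `ceph()` helper of our own:

    import subprocess

    SUB = "672756b3-d8dc-429b-8b05-6a6f7934e823"  # subvolume name from the log

    def ceph(*args):
        return subprocess.run(["ceph", *args], capture_output=True, text=True)

    ceph("fs", "subvolume", "create", "cephfs", SUB, "1073741824",
         "--namespace-isolated", "--mode", "0755")
    ceph("fs", "subvolume", "getpath", "cephfs", SUB)
    ceph("fs", "clone", "status", "cephfs", SUB)   # EOPNOTSUPP, as in the log
    ceph("fs", "subvolume", "rm", "cephfs", SUB, "--force")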
Jan 21 14:12:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 0 op/s
Jan 21 14:12:39 compute-0 ceph-mon[75031]: pgmap v967: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 0 op/s
Jan 21 14:12:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:12:39
Jan 21 14:12:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:12:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:12:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.rgw.root', 'images']
Jan 21 14:12:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
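This balancer tick is routine: mode `upmap` with the default 5% max-misplaced budget, all eleven pools considered, and 0 of the per-iteration cap of 10 upmap changes prepared, meaning PG placement is already even at this cluster's size. A quick way to confirm the same state out of band; `ceph balancer status` prints a JSON document, and the field names in the comment are what recent releases emit:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status"],
        check=True, capture_output=True, text=True).stdout)
    # Typical fields: "active", "mode", "optimize_result", "plans".
    print(status["active"], status["mode"])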
Jan 21 14:12:39 compute-0 nova_compute[239261]: 2026-01-21 14:12:39.726 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:12:39 compute-0 nova_compute[239261]: 2026-01-21 14:12:39.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:12:39 compute-0 nova_compute[239261]: 2026-01-21 14:12:39.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:12:39 compute-0 nova_compute[239261]: 2026-01-21 14:12:39.758 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:12:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 17 KiB/s wr, 2 op/s
Jan 21 14:12:40 compute-0 nova_compute[239261]: 2026-01-21 14:12:40.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:12:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:12:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:12:41 compute-0 ceph-mon[75031]: pgmap v968: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 17 KiB/s wr, 2 op/s
Jan 21 14:12:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 2 op/s
Jan 21 14:12:43 compute-0 nova_compute[239261]: 2026-01-21 14:12:43.436 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:12:43 compute-0 nova_compute[239261]: 2026-01-21 14:12:43.437 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:12:43 compute-0 nova_compute[239261]: 2026-01-21 14:12:43.437 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:12:43 compute-0 nova_compute[239261]: 2026-01-21 14:12:43.437 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:12:43 compute-0 nova_compute[239261]: 2026-01-21 14:12:43.438 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:12:43 compute-0 ceph-mon[75031]: pgmap v969: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 2 op/s
Jan 21 14:12:44 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:12:44.051 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:12:44 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:12:44.052 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:12:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:12:44 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1466131476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:12:44 compute-0 nova_compute[239261]: 2026-01-21 14:12:44.154 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.716s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:12:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 2 op/s
Jan 21 14:12:44 compute-0 nova_compute[239261]: 2026-01-21 14:12:44.315 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:12:44 compute-0 nova_compute[239261]: 2026-01-21 14:12:44.316 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:12:44 compute-0 nova_compute[239261]: 2026-01-21 14:12:44.317 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:12:44 compute-0 nova_compute[239261]: 2026-01-21 14:12:44.317 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:12:45 compute-0 nova_compute[239261]: 2026-01-21 14:12:45.002 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:12:45 compute-0 nova_compute[239261]: 2026-01-21 14:12:45.003 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:12:45 compute-0 nova_compute[239261]: 2026-01-21 14:12:45.025 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:12:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:12:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:12:45 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1466131476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:12:45 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/992722e6-fc0f-4dc3-97ca-752fee9b705f'.
Jan 21 14:12:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp'
Jan 21 14:12:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp' to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta'
Jan 21 14:12:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:12:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "format": "json"}]: dispatch
Jan 21 14:12:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:12:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:12:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:12:45 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:12:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:12:45 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1641634475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:12:45 compute-0 nova_compute[239261]: 2026-01-21 14:12:45.881 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.856s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:12:45 compute-0 nova_compute[239261]: 2026-01-21 14:12:45.887 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:12:45 compute-0 nova_compute[239261]: 2026-01-21 14:12:45.909 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:12:45 compute-0 nova_compute[239261]: 2026-01-21 14:12:45.910 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:12:45 compute-0 nova_compute[239261]: 2026-01-21 14:12:45.911 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:12:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Jan 21 14:12:46 compute-0 nova_compute[239261]: 2026-01-21 14:12:46.911 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:12:46 compute-0 nova_compute[239261]: 2026-01-21 14:12:46.912 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:12:46 compute-0 nova_compute[239261]: 2026-01-21 14:12:46.912 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:12:46 compute-0 nova_compute[239261]: 2026-01-21 14:12:46.912 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:12:46 compute-0 nova_compute[239261]: 2026-01-21 14:12:46.912 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:12:46 compute-0 nova_compute[239261]: 2026-01-21 14:12:46.912 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:12:46 compute-0 nova_compute[239261]: 2026-01-21 14:12:46.913 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:12:46 compute-0 nova_compute[239261]: 2026-01-21 14:12:46.913 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:12:47 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:12:47.053 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:12:47 compute-0 ceph-mon[75031]: pgmap v970: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 2 op/s
Jan 21 14:12:47 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:12:47 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "format": "json"}]: dispatch
Jan 21 14:12:47 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:12:47 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1641634475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:12:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 12 KiB/s wr, 2 op/s
Jan 21 14:12:48 compute-0 ceph-mon[75031]: pgmap v971: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Jan 21 14:12:49 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "snap_name": "e3be0d8d-321f-4f19-926e-84f856a6aa95", "format": "json"}]: dispatch
Jan 21 14:12:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:12:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:12:49 compute-0 ceph-mon[75031]: pgmap v972: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 12 KiB/s wr, 2 op/s
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 20 KiB/s wr, 3 op/s
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1dd3a4c4-ba47-419f-88a7-3a23e3b00147/4f12a03b-2b1c-4bba-a51f-c6afbf76db5e'.
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1dd3a4c4-ba47-419f-88a7-3a23e3b00147/.meta.tmp'
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1dd3a4c4-ba47-419f-88a7-3a23e3b00147/.meta.tmp' to config b'/volumes/_nogroup/1dd3a4c4-ba47-419f-88a7-3a23e3b00147/.meta'
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "format": "json"}]: dispatch
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 14:12:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:12:50 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666224662312277 of space, bias 1.0, pg target 0.1998673986936831 quantized to 32 (current 32)
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 3.248075492805729e-05 of space, bias 4.0, pg target 0.03897690591366875 quantized to 16 (current 16)
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:12:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:12:50 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "snap_name": "e3be0d8d-321f-4f19-926e-84f856a6aa95", "format": "json"}]: dispatch
Jan 21 14:12:50 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:12:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:51 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:12:51 compute-0 ceph-mon[75031]: pgmap v973: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 20 KiB/s wr, 3 op/s
Jan 21 14:12:51 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "format": "json"}]: dispatch
Jan 21 14:12:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Jan 21 14:12:53 compute-0 ceph-mon[75031]: pgmap v974: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Jan 21 14:12:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Jan 21 14:12:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "snap_name": "e3be0d8d-321f-4f19-926e-84f856a6aa95_c7ae62df-b2fa-47d8-aba5-e6ef84f541d4", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95_c7ae62df-b2fa-47d8-aba5-e6ef84f541d4, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:12:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp'
Jan 21 14:12:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp' to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta'
Jan 21 14:12:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95_c7ae62df-b2fa-47d8-aba5-e6ef84f541d4, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:12:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "snap_name": "e3be0d8d-321f-4f19-926e-84f856a6aa95", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:12:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp'
Jan 21 14:12:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta.tmp' to config b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29/.meta'
Jan 21 14:12:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3be0d8d-321f-4f19-926e-84f856a6aa95, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:12:55 compute-0 ceph-mon[75031]: pgmap v975: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Jan 21 14:12:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:12:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 2 op/s
Jan 21 14:12:56 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "snap_name": "e3be0d8d-321f-4f19-926e-84f856a6aa95_c7ae62df-b2fa-47d8-aba5-e6ef84f541d4", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:56 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "snap_name": "e3be0d8d-321f-4f19-926e-84f856a6aa95", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:57 compute-0 ceph-mon[75031]: pgmap v976: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 2 op/s
Jan 21 14:12:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 2 op/s
Jan 21 14:12:58 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "909aa505-0296-4e74-80ca-1c8370556d29", "format": "json"}]: dispatch
Jan 21 14:12:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:909aa505-0296-4e74-80ca-1c8370556d29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:909aa505-0296-4e74-80ca-1c8370556d29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:12:58 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:12:58.932+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '909aa505-0296-4e74-80ca-1c8370556d29' of type subvolume
Jan 21 14:12:58 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '909aa505-0296-4e74-80ca-1c8370556d29' of type subvolume
Jan 21 14:12:58 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "force": true, "format": "json"}]: dispatch
Jan 21 14:12:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:12:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/909aa505-0296-4e74-80ca-1c8370556d29'' moved to trashcan
Jan 21 14:12:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:12:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:909aa505-0296-4e74-80ca-1c8370556d29, vol_name:cephfs) < ""
Jan 21 14:13:00 compute-0 ceph-mon[75031]: pgmap v977: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 2 op/s
Jan 21 14:13:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 43 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 5 op/s
Jan 21 14:13:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "909aa505-0296-4e74-80ca-1c8370556d29", "format": "json"}]: dispatch
Jan 21 14:13:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "909aa505-0296-4e74-80ca-1c8370556d29", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:01 compute-0 ceph-mon[75031]: pgmap v978: 305 pgs: 305 active+clean; 43 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 5 op/s
Jan 21 14:13:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 21 14:13:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 21 14:13:02 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 21 14:13:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 43 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 28 KiB/s wr, 4 op/s
Jan 21 14:13:03 compute-0 ceph-mon[75031]: osdmap e138: 3 total, 3 up, 3 in
Jan 21 14:13:03 compute-0 ceph-mon[75031]: pgmap v980: 305 pgs: 305 active+clean; 43 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 28 KiB/s wr, 4 op/s
Jan 21 14:13:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 43 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 28 KiB/s wr, 4 op/s
Jan 21 14:13:04 compute-0 podman[246722]: 2026-01-21 14:13:04.323071059 +0000 UTC m=+0.049232570 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 14:13:04 compute-0 podman[246721]: 2026-01-21 14:13:04.352444175 +0000 UTC m=+0.081693741 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 21 14:13:05 compute-0 ceph-mon[75031]: pgmap v981: 305 pgs: 305 active+clean; 43 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 28 KiB/s wr, 4 op/s
Jan 21 14:13:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 14:13:07 compute-0 ceph-mon[75031]: pgmap v982: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 14:13:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 14:13:08 compute-0 ceph-mon[75031]: pgmap v983: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 14:13:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 14:13:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:13:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:13:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:13:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:13:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:13:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:13:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 21 14:13:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 21 14:13:11 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 21 14:13:11 compute-0 ceph-mon[75031]: pgmap v984: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 14:13:11 compute-0 ceph-mon[75031]: osdmap e139: 3 total, 3 up, 3 in
Jan 21 14:13:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 14:13:13 compute-0 ceph-mon[75031]: pgmap v986: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 14:13:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 14:13:14 compute-0 ceph-mon[75031]: pgmap v987: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 14:13:15 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "format": "json"}]: dispatch
Jan 21 14:13:15 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:13:15 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:13:15 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:15.150+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1dd3a4c4-ba47-419f-88a7-3a23e3b00147' of type subvolume
Jan 21 14:13:15 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1dd3a4c4-ba47-419f-88a7-3a23e3b00147' of type subvolume
Jan 21 14:13:15 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:15 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 14:13:15 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1dd3a4c4-ba47-419f-88a7-3a23e3b00147'' moved to trashcan
Jan 21 14:13:15 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:13:15 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1dd3a4c4-ba47-419f-88a7-3a23e3b00147, vol_name:cephfs) < ""
Jan 21 14:13:16 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "format": "json"}]: dispatch
Jan 21 14:13:16 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1dd3a4c4-ba47-419f-88a7-3a23e3b00147", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s wr, 0 op/s
Jan 21 14:13:17 compute-0 ceph-mon[75031]: pgmap v988: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s wr, 0 op/s
Jan 21 14:13:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s wr, 0 op/s
Jan 21 14:13:19 compute-0 ceph-mon[75031]: pgmap v989: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s wr, 0 op/s
Jan 21 14:13:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 14:13:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:13:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:13:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:21 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/b0d2c918-ebda-4af3-87c7-5d6e78fc290b'.
Jan 21 14:13:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp'
Jan 21 14:13:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp' to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta'
Jan 21 14:13:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:13:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "format": "json"}]: dispatch
Jan 21 14:13:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:13:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:13:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:13:21 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:13:22 compute-0 ceph-mon[75031]: pgmap v990: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 9.6 KiB/s wr, 1 op/s
Jan 21 14:13:22 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:13:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 8.6 KiB/s wr, 1 op/s
Jan 21 14:13:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:13:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2798542946' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:13:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:13:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2798542946' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:13:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:13:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "format": "json"}]: dispatch
Jan 21 14:13:23 compute-0 ceph-mon[75031]: pgmap v991: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 8.6 KiB/s wr, 1 op/s
Jan 21 14:13:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2798542946' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:13:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2798542946' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:13:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 8.1 KiB/s wr, 1 op/s
Jan 21 14:13:24 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501", "format": "json"}]: dispatch
Jan 21 14:13:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:13:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:13:25 compute-0 ceph-mon[75031]: pgmap v992: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 8.1 KiB/s wr, 1 op/s
Jan 21 14:13:25 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501", "format": "json"}]: dispatch
Jan 21 14:13:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Jan 21 14:13:26 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501", "target_sub_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 14:13:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, target_sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 14:13:27 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 21 14:13:27 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:27.734450) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:13:27 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 21 14:13:27 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004807734477, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1457, "num_deletes": 255, "total_data_size": 2150262, "memory_usage": 2190096, "flush_reason": "Manual Compaction"}
Jan 21 14:13:27 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 21 14:13:27 compute-0 ceph-mon[75031]: pgmap v993: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Jan 21 14:13:27 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501", "target_sub_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 14:13:27 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004807863491, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2116965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19672, "largest_seqno": 21128, "table_properties": {"data_size": 2109956, "index_size": 4017, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15534, "raw_average_key_size": 20, "raw_value_size": 2095613, "raw_average_value_size": 2790, "num_data_blocks": 181, "num_entries": 751, "num_filter_entries": 751, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004681, "oldest_key_time": 1769004681, "file_creation_time": 1769004807, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:13:27 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 129138 microseconds, and 5045 cpu microseconds.
Jan 21 14:13:27 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:13:27 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/441def10-d72f-43de-9c5a-cb8d8d24291f'.
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:27.863545) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2116965 bytes OK
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:27.863603) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.070786) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.070837) EVENT_LOG_v1 {"time_micros": 1769004808070829, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.070859) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2143624, prev total WAL file size 2143624, number of live WAL files 2.
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.071620) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2067KB)], [47(7365KB)]
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004808071672, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9659270, "oldest_snapshot_seqno": -1}
Jan 21 14:13:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4464 keys, 7879263 bytes, temperature: kUnknown
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004808577784, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7879263, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7848309, "index_size": 18684, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 110692, "raw_average_key_size": 24, "raw_value_size": 7766515, "raw_average_value_size": 1739, "num_data_blocks": 780, "num_entries": 4464, "num_filter_entries": 4464, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:13:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp'
Jan 21 14:13:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp' to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta'
Jan 21 14:13:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] tracking-id e2a84b3d-b747-462a-8827-a0dee34dcf5e for path b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8'
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.578061) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7879263 bytes
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.873523) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 19.1 rd, 15.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.2 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.3) write-amplify(3.7) OK, records in: 4987, records dropped: 523 output_compression: NoCompression
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.873628) EVENT_LOG_v1 {"time_micros": 1769004808873602, "job": 24, "event": "compaction_finished", "compaction_time_micros": 506188, "compaction_time_cpu_micros": 17453, "output_level": 6, "num_output_files": 1, "total_output_size": 7879263, "num_input_records": 4987, "num_output_records": 4464, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004808874273, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004808876141, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.071532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.876197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.876203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.876204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.876206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:13:28 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:13:28.876207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp'
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp' to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta'
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, target_sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:13:29 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.041+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.041+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.041+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.041+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.041+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 6fd14f2b-0487-4f6b-a678-d4c00c894fd8)
Jan 21 14:13:29 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.471+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.471+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.471+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.471+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:29.471+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:29 compute-0 ceph-mon[75031]: pgmap v994: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 6fd14f2b-0487-4f6b-a678-d4c00c894fd8) -- by 0 seconds
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp'
Jan 21 14:13:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp' to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta'
Jan 21 14:13:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:30.040+0000 7fc4f7b4a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:30.040+0000 7fc4f7b4a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:30.040+0000 7fc4f7b4a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:30.040+0000 7fc4f7b4a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:30.040+0000 7fc4f7b4a640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.snap/09b94b8f-18fe-41bc-bc29-2dce63cc4501/b0d2c918-ebda-4af3-87c7-5d6e78fc290b' to b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/441def10-d72f-43de-9c5a-cb8d8d24291f'
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp'
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp' to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta'
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] untracking e2a84b3d-b747-462a-8827-a0dee34dcf5e
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 32 KiB/s wr, 4 op/s
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp'
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp' to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta'
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp'
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta.tmp' to config b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8/.meta'
Jan 21 14:13:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 6fd14f2b-0487-4f6b-a678-d4c00c894fd8)
Jan 21 14:13:30 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.tnwklj(active, since 28m)
Jan 21 14:13:30 compute-0 sudo[246799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:13:30 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 14:13:30 compute-0 sudo[246799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:13:30 compute-0 sudo[246799]: pam_unix(sudo:session): session closed for user root
Jan 21 14:13:30 compute-0 sudo[246824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:13:30 compute-0 sudo[246824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:13:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Jan 21 14:13:31 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 21 14:13:31 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Jan 21 14:13:31 compute-0 ceph-mgr[75322]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Jan 21 14:13:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Jan 21 14:13:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7fc5286a2bb0>
Jan 21 14:13:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:31 compute-0 sudo[246824]: pam_unix(sudo:session): session closed for user root
Jan 21 14:13:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:13:31 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:13:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:13:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:13:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:13:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:13:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:13:31 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:13:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:13:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:13:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:13:31 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:13:31 compute-0 sudo[246880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:13:31 compute-0 ceph-mgr[75322]: [progress INFO root] Writing back 18 completed events
Jan 21 14:13:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 14:13:31 compute-0 sudo[246880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:13:31 compute-0 sudo[246880]: pam_unix(sudo:session): session closed for user root
Jan 21 14:13:31 compute-0 sudo[246905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:13:31 compute-0 sudo[246905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:13:32 compute-0 ceph-mon[75031]: pgmap v995: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 32 KiB/s wr, 4 op/s
Jan 21 14:13:32 compute-0 ceph-mon[75031]: mgrmap e14: compute-0.tnwklj(active, since 28m)
Jan 21 14:13:32 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:13:32 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:13:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s wr, 2 op/s
Jan 21 14:13:32 compute-0 podman[246943]: 2026-01-21 14:13:32.267224392 +0000 UTC m=+0.024314193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:13:32 compute-0 podman[246943]: 2026-01-21 14:13:32.476128492 +0000 UTC m=+0.233218273 container create 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:13:32 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:13:32 compute-0 systemd[1]: Started libpod-conmon-9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62.scope.
Jan 21 14:13:32 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:13:32 compute-0 podman[246943]: 2026-01-21 14:13:32.580038095 +0000 UTC m=+0.337127906 container init 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 14:13:32 compute-0 podman[246943]: 2026-01-21 14:13:32.593243396 +0000 UTC m=+0.350333177 container start 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 14:13:32 compute-0 podman[246943]: 2026-01-21 14:13:32.597206853 +0000 UTC m=+0.354296634 container attach 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 14:13:32 compute-0 musing_moser[246959]: 167 167
Jan 21 14:13:32 compute-0 systemd[1]: libpod-9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62.scope: Deactivated successfully.
Jan 21 14:13:32 compute-0 conmon[246959]: conmon 9320d1c1846641279fc8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62.scope/container/memory.events
Jan 21 14:13:32 compute-0 podman[246943]: 2026-01-21 14:13:32.60203875 +0000 UTC m=+0.359128541 container died 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 21 14:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a4c05cec54d077ee00350d1d61662e711fb37d050054c0c0ac0fea9992d4c51-merged.mount: Deactivated successfully.
Jan 21 14:13:32 compute-0 podman[246943]: 2026-01-21 14:13:32.699818533 +0000 UTC m=+0.456908314 container remove 9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_moser, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:13:32 compute-0 systemd[1]: libpod-conmon-9320d1c1846641279fc803d1b5deadc2434912cc672a7e01aa5dace5d21cbf62.scope: Deactivated successfully.
Jan 21 14:13:32 compute-0 podman[246982]: 2026-01-21 14:13:32.85280648 +0000 UTC m=+0.038223072 container create b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 14:13:32 compute-0 systemd[1]: Started libpod-conmon-b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8.scope.
Jan 21 14:13:32 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:32 compute-0 podman[246982]: 2026-01-21 14:13:32.836084273 +0000 UTC m=+0.021500885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:13:32 compute-0 podman[246982]: 2026-01-21 14:13:32.932957724 +0000 UTC m=+0.118374366 container init b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 21 14:13:32 compute-0 podman[246982]: 2026-01-21 14:13:32.947380315 +0000 UTC m=+0.132796917 container start b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:13:32 compute-0 podman[246982]: 2026-01-21 14:13:32.951890835 +0000 UTC m=+0.137307467 container attach b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 14:13:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:13:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:13:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:13:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:13:33 compute-0 ceph-mon[75031]: pgmap v996: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s wr, 2 op/s
Jan 21 14:13:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:13:33 compute-0 hopeful_goldstine[246998]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:13:33 compute-0 hopeful_goldstine[246998]: --> All data devices are unavailable
Jan 21 14:13:33 compute-0 systemd[1]: libpod-b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8.scope: Deactivated successfully.
Jan 21 14:13:33 compute-0 conmon[246998]: conmon b997ae0146ab2393b634 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8.scope/container/memory.events
Jan 21 14:13:33 compute-0 podman[246982]: 2026-01-21 14:13:33.443278168 +0000 UTC m=+0.628694760 container died b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 14:13:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e612bd0759994a796c73252b62e44ffa423ad27ff40fbca804aff5ba0e22908f-merged.mount: Deactivated successfully.
Jan 21 14:13:33 compute-0 podman[246982]: 2026-01-21 14:13:33.5016194 +0000 UTC m=+0.687036032 container remove b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:13:33 compute-0 systemd[1]: libpod-conmon-b997ae0146ab2393b634d928c3446c400a7594b66b3a50f149a238f1da6c16c8.scope: Deactivated successfully.
Jan 21 14:13:33 compute-0 sudo[246905]: pam_unix(sudo:session): session closed for user root
Jan 21 14:13:33 compute-0 sudo[247032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:13:33 compute-0 sudo[247032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:13:33 compute-0 sudo[247032]: pam_unix(sudo:session): session closed for user root
Jan 21 14:13:33 compute-0 sudo[247057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:13:33 compute-0 sudo[247057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:13:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:13:33.903 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:13:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:13:33.905 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:13:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:13:33.905 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:13:33 compute-0 podman[247093]: 2026-01-21 14:13:33.972433702 +0000 UTC m=+0.043096281 container create d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:13:34 compute-0 systemd[1]: Started libpod-conmon-d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962.scope.
Jan 21 14:13:34 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:13:34 compute-0 podman[247093]: 2026-01-21 14:13:33.952853545 +0000 UTC m=+0.023516114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:13:34 compute-0 podman[247093]: 2026-01-21 14:13:34.053764564 +0000 UTC m=+0.124427153 container init d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:13:34 compute-0 podman[247093]: 2026-01-21 14:13:34.063461461 +0000 UTC m=+0.134124030 container start d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:13:34 compute-0 podman[247093]: 2026-01-21 14:13:34.067754065 +0000 UTC m=+0.138416634 container attach d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 14:13:34 compute-0 systemd[1]: libpod-d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962.scope: Deactivated successfully.
Jan 21 14:13:34 compute-0 upbeat_feistel[247109]: 167 167
Jan 21 14:13:34 compute-0 conmon[247109]: conmon d881508b713926b41115 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962.scope/container/memory.events
Jan 21 14:13:34 compute-0 podman[247093]: 2026-01-21 14:13:34.069421806 +0000 UTC m=+0.140084385 container died d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 14:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3812bbefbffccfd575c551838b9b5382f1e1a952f2131a54892b139338e44904-merged.mount: Deactivated successfully.
Jan 21 14:13:34 compute-0 podman[247093]: 2026-01-21 14:13:34.109551544 +0000 UTC m=+0.180214113 container remove d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:13:34 compute-0 systemd[1]: libpod-conmon-d881508b713926b41115a159742323a83297057fb3753bcb72ac9cc47cfe9962.scope: Deactivated successfully.
Jan 21 14:13:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 24 KiB/s wr, 3 op/s
Jan 21 14:13:34 compute-0 podman[247136]: 2026-01-21 14:13:34.352149895 +0000 UTC m=+0.071070522 container create 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 14:13:34 compute-0 systemd[1]: Started libpod-conmon-3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31.scope.
Jan 21 14:13:34 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672a17c45a4369de27e47f8ee10473a299bbfac1c754da2de4027e63b79db30f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672a17c45a4369de27e47f8ee10473a299bbfac1c754da2de4027e63b79db30f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672a17c45a4369de27e47f8ee10473a299bbfac1c754da2de4027e63b79db30f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/672a17c45a4369de27e47f8ee10473a299bbfac1c754da2de4027e63b79db30f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:34 compute-0 podman[247136]: 2026-01-21 14:13:34.322053202 +0000 UTC m=+0.040973929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:13:34 compute-0 podman[247136]: 2026-01-21 14:13:34.42378357 +0000 UTC m=+0.142704227 container init 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 14:13:34 compute-0 podman[247136]: 2026-01-21 14:13:34.431927319 +0000 UTC m=+0.150847956 container start 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 14:13:34 compute-0 podman[247136]: 2026-01-21 14:13:34.435875865 +0000 UTC m=+0.154796512 container attach 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:13:34 compute-0 podman[247153]: 2026-01-21 14:13:34.448311518 +0000 UTC m=+0.061088420 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 21 14:13:34 compute-0 podman[247150]: 2026-01-21 14:13:34.490345353 +0000 UTC m=+0.098725778 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 21 14:13:34 compute-0 adoring_bartik[247154]: {
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:     "0": [
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:         {
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "devices": [
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "/dev/loop3"
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             ],
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_name": "ceph_lv0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_size": "21470642176",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "name": "ceph_lv0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "tags": {
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.cluster_name": "ceph",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.crush_device_class": "",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.encrypted": "0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.objectstore": "bluestore",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.osd_id": "0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.type": "block",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.vdo": "0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.with_tpm": "0"
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             },
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "type": "block",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "vg_name": "ceph_vg0"
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:         }
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:     ],
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:     "1": [
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:         {
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "devices": [
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "/dev/loop4"
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             ],
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_name": "ceph_lv1",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_size": "21470642176",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "name": "ceph_lv1",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "tags": {
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.cluster_name": "ceph",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.crush_device_class": "",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.encrypted": "0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.objectstore": "bluestore",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.osd_id": "1",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.type": "block",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.vdo": "0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.with_tpm": "0"
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             },
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "type": "block",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "vg_name": "ceph_vg1"
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:         }
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:     ],
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:     "2": [
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:         {
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "devices": [
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "/dev/loop5"
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             ],
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_name": "ceph_lv2",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_size": "21470642176",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "name": "ceph_lv2",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "tags": {
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.cluster_name": "ceph",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.crush_device_class": "",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.encrypted": "0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.objectstore": "bluestore",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.osd_id": "2",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.type": "block",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.vdo": "0",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:                 "ceph.with_tpm": "0"
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             },
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "type": "block",
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:             "vg_name": "ceph_vg2"
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:         }
Jan 21 14:13:34 compute-0 adoring_bartik[247154]:     ]
Jan 21 14:13:34 compute-0 adoring_bartik[247154]: }
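[annotation] The JSON block printed by the adoring_bartik container above is consistent with the output of cephadm running `ceph-volume lvm list --format json` in a one-shot ceph container: the top-level keys "0", "1", "2" are OSD IDs, and each entry describes the logical volume backing that OSD (backing device, LV path/UUID, and the ceph.* LV tags). A minimal parsing sketch, assuming the JSON has been captured to a local file (the filename is hypothetical):

    # Minimal sketch: summarize the per-OSD LVM inventory shown above.
    # Assumes the JSON block was saved to a local file; the name is hypothetical.
    import json

    with open("ceph-volume-lvm-list.json") as fh:
        osds = json.load(fh)

    for osd_id in sorted(osds, key=int):
        for lv in osds[osd_id]:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")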
Jan 21 14:13:34 compute-0 systemd[1]: libpod-3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31.scope: Deactivated successfully.
Jan 21 14:13:34 compute-0 conmon[247154]: conmon 3f6dd928f1072c74bcc1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31.scope/container/memory.events
Jan 21 14:13:34 compute-0 podman[247136]: 2026-01-21 14:13:34.779285003 +0000 UTC m=+0.498205640 container died 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-672a17c45a4369de27e47f8ee10473a299bbfac1c754da2de4027e63b79db30f-merged.mount: Deactivated successfully.
Jan 21 14:13:34 compute-0 podman[247136]: 2026-01-21 14:13:34.824834953 +0000 UTC m=+0.543755590 container remove 3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bartik, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 14:13:34 compute-0 systemd[1]: libpod-conmon-3f6dd928f1072c74bcc1e22a9375fa52b0f4f39778369864bc9c6be6774a1b31.scope: Deactivated successfully.
Jan 21 14:13:34 compute-0 sudo[247057]: pam_unix(sudo:session): session closed for user root
Jan 21 14:13:34 compute-0 sudo[247217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:13:34 compute-0 sudo[247217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:13:34 compute-0 sudo[247217]: pam_unix(sudo:session): session closed for user root
Jan 21 14:13:34 compute-0 sudo[247242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:13:35 compute-0 sudo[247242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:13:35 compute-0 podman[247280]: 2026-01-21 14:13:35.273844194 +0000 UTC m=+0.042777343 container create 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 14:13:35 compute-0 systemd[1]: Started libpod-conmon-545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96.scope.
Jan 21 14:13:35 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:13:35 compute-0 podman[247280]: 2026-01-21 14:13:35.254143383 +0000 UTC m=+0.023076522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:13:35 compute-0 podman[247280]: 2026-01-21 14:13:35.355003322 +0000 UTC m=+0.123936501 container init 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:13:35 compute-0 podman[247280]: 2026-01-21 14:13:35.360980807 +0000 UTC m=+0.129913926 container start 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:13:35 compute-0 podman[247280]: 2026-01-21 14:13:35.365686522 +0000 UTC m=+0.134619641 container attach 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:13:35 compute-0 trusting_haslett[247296]: 167 167
Jan 21 14:13:35 compute-0 systemd[1]: libpod-545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96.scope: Deactivated successfully.
Jan 21 14:13:35 compute-0 conmon[247296]: conmon 545c8aa0c30b3db666f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96.scope/container/memory.events
Jan 21 14:13:35 compute-0 podman[247280]: 2026-01-21 14:13:35.368677304 +0000 UTC m=+0.137610443 container died 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 21 14:13:35 compute-0 ceph-mon[75031]: pgmap v997: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 24 KiB/s wr, 3 op/s
Jan 21 14:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4a2de0755e778fc10d9fcf737c017df97e1a1972b92a6496ea43cd0c145310b-merged.mount: Deactivated successfully.
Jan 21 14:13:35 compute-0 podman[247280]: 2026-01-21 14:13:35.410971955 +0000 UTC m=+0.179905104 container remove 545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_haslett, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 14:13:35 compute-0 systemd[1]: libpod-conmon-545c8aa0c30b3db666f9632e6b29466f2c401b4ffe23c6e76184042ccba8ae96.scope: Deactivated successfully.
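[annotation] The trusting_haslett container lives for a fraction of a second and prints only "167 167" (14:13:35 above). That output matches cephadm's uid/gid probe, which stats /var/lib/ceph inside the image to learn the ceph user's uid and gid (167:167 in these CentOS-based ceph images) before launching ceph-volume. A hedged sketch of that probe pattern; that this is exactly what cephadm did here is an inference from the log:

    # Sketch of a uid/gid probe like the one that apparently produced "167 167":
    # run `stat` inside the image and read uid/gid from stdout.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    out = subprocess.check_output(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        text=True,
    )
    uid, gid = map(int, out.split())
    print(uid, gid)  # expected here: 167 167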
Jan 21 14:13:35 compute-0 podman[247319]: 2026-01-21 14:13:35.585239272 +0000 UTC m=+0.051827514 container create 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:13:35 compute-0 systemd[1]: Started libpod-conmon-11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84.scope.
Jan 21 14:13:35 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cdf10419b6cf904a40df76dfc002c89d54339e62e89807d8f091f33168b39f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cdf10419b6cf904a40df76dfc002c89d54339e62e89807d8f091f33168b39f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cdf10419b6cf904a40df76dfc002c89d54339e62e89807d8f091f33168b39f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cdf10419b6cf904a40df76dfc002c89d54339e62e89807d8f091f33168b39f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:13:35 compute-0 podman[247319]: 2026-01-21 14:13:35.561697248 +0000 UTC m=+0.028285550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:13:35 compute-0 podman[247319]: 2026-01-21 14:13:35.65988472 +0000 UTC m=+0.126472982 container init 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 14:13:35 compute-0 podman[247319]: 2026-01-21 14:13:35.670907609 +0000 UTC m=+0.137495851 container start 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 14:13:35 compute-0 podman[247319]: 2026-01-21 14:13:35.674401274 +0000 UTC m=+0.140989706 container attach 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 21 14:13:35 compute-0 nova_compute[239261]: 2026-01-21 14:13:35.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 37 KiB/s wr, 6 op/s
Jan 21 14:13:36 compute-0 lvm[247414]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:13:36 compute-0 lvm[247414]: VG ceph_vg1 finished
Jan 21 14:13:36 compute-0 lvm[247413]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:13:36 compute-0 lvm[247413]: VG ceph_vg0 finished
Jan 21 14:13:36 compute-0 lvm[247416]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:13:36 compute-0 lvm[247416]: VG ceph_vg2 finished
Jan 21 14:13:36 compute-0 serene_lumiere[247335]: {}
Jan 21 14:13:36 compute-0 systemd[1]: libpod-11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84.scope: Deactivated successfully.
Jan 21 14:13:36 compute-0 systemd[1]: libpod-11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84.scope: Consumed 1.301s CPU time.
Jan 21 14:13:36 compute-0 podman[247419]: 2026-01-21 14:13:36.542439946 +0000 UTC m=+0.027048021 container died 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 14:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-94cdf10419b6cf904a40df76dfc002c89d54339e62e89807d8f091f33168b39f-merged.mount: Deactivated successfully.
Jan 21 14:13:36 compute-0 podman[247419]: 2026-01-21 14:13:36.583385173 +0000 UTC m=+0.067993238 container remove 11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 14:13:36 compute-0 systemd[1]: libpod-conmon-11956237dc8df0511bae2e2488e6d6c3ac9a3bb5a6a92e43cc7900f1cfe78c84.scope: Deactivated successfully.
Jan 21 14:13:36 compute-0 sudo[247242]: pam_unix(sudo:session): session closed for user root
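[annotation] The serene_lumiere container is the `ceph-volume raw list --format json` run requested by the sudo command at 14:13:34, and its entire output is the `{}` at 14:13:36: no raw-mode (non-LVM) OSDs to report, which agrees with the LVM-backed inventory printed earlier. cephadm collects both listings so it can manage either deployment style; a tiny sketch of merging them (filenames hypothetical):

    # Sketch: discovery consumes both listings; an empty raw listing simply
    # contributes nothing to the set of OSD IDs.
    import json

    lvm = json.load(open("ceph-volume-lvm-list.json"))  # the JSON block above
    raw = json.loads("{}")                              # serene_lumiere output

    osd_ids = sorted(set(lvm) | set(raw), key=int)
    print("discovered OSDs:", ", ".join(f"osd.{i}" for i in osd_ids))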
Jan 21 14:13:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:13:36 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:13:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:13:36 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
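[annotation] With discovery done, the cephadm mgr persists the host's device inventory in the monitor's config-key store (the two `config-key set` commands above, keyed by host). The stored blob can be read back with the standard `ceph config-key get` command; a minimal sketch, assuming an admin keyring on the node and that the payload is JSON (as current cephadm stores it):

    # Sketch: read back the inventory mgr/cephadm just stored. The key name is
    # copied from the audit line above.
    import json, subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.check_output(["ceph", "config-key", "get", key], text=True)
    print(json.dumps(json.loads(blob), indent=2)[:400])  # peek at the payload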
Jan 21 14:13:36 compute-0 sudo[247434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:13:36 compute-0 sudo[247434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:13:36 compute-0 sudo[247434]: pam_unix(sudo:session): session closed for user root
Jan 21 14:13:37 compute-0 ceph-mon[75031]: pgmap v998: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 37 KiB/s wr, 6 op/s
Jan 21 14:13:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:13:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:13:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 27 KiB/s wr, 5 op/s
Jan 21 14:13:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:13:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:38 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/c098c762-a168-49c0-8e80-1871b71016e6'.
Jan 21 14:13:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp'
Jan 21 14:13:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp' to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta'
Jan 21 14:13:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "format": "json"}]: dispatch
Jan 21 14:13:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
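[annotation] The audit lines above show client.openstack (an OpenStack service driving the CephFS volumes module, presumably the Manila CephFS driver) creating a 1 GiB, namespace-isolated subvolume and immediately resolving its path. The same two mgr calls can be reproduced with the ceph CLI; a sketch, with the subvolume name taken from the log:

    # Sketch: CLI equivalents of the two dispatched mgr commands above.
    import subprocess

    sub = "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0"
    subprocess.check_call(
        ["ceph", "fs", "subvolume", "create", "cephfs", sub,
         "--size", "1073741824", "--namespace-isolated", "--mode", "0755"])
    path = subprocess.check_output(
        ["ceph", "fs", "subvolume", "getpath", "cephfs", sub], text=True).strip()
    print(path)  # e.g. /volumes/_nogroup/<sub_name>/<uuid>

The same create/getpath pair repeats at 14:13:41 for subvolume 98584c80-dc48-400e-a1ef-b94d26420f34, and a snapshot of the first subvolume follows at 14:13:42.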
Jan 21 14:13:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:13:38 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:13:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:13:39
Jan 21 14:13:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:13:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:13:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.mgr', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups', 'volumes']
Jan 21 14:13:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:13:39 compute-0 ceph-mon[75031]: pgmap v999: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 27 KiB/s wr, 5 op/s
Jan 21 14:13:39 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:13:39 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:13:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 37 KiB/s wr, 7 op/s
Jan 21 14:13:40 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "format": "json"}]: dispatch
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:13:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:13:41 compute-0 ceph-mon[75031]: pgmap v1000: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 37 KiB/s wr, 7 op/s
Jan 21 14:13:41 compute-0 nova_compute[239261]: 2026-01-21 14:13:41.741 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:41 compute-0 nova_compute[239261]: 2026-01-21 14:13:41.742 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:13:41 compute-0 nova_compute[239261]: 2026-01-21 14:13:41.742 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:13:41 compute-0 nova_compute[239261]: 2026-01-21 14:13:41.755 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/98584c80-dc48-400e-a1ef-b94d26420f34/074547f2-7f4b-4646-af87-b0582d94198e'.
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/98584c80-dc48-400e-a1ef-b94d26420f34/.meta.tmp'
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98584c80-dc48-400e-a1ef-b94d26420f34/.meta.tmp' to config b'/volumes/_nogroup/98584c80-dc48-400e-a1ef-b94d26420f34/.meta'
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "format": "json"}]: dispatch
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 14:13:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 14:13:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:13:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:13:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "f1069ca7-a7f5-4b7d-93eb-79908004053c", "format": "json"}]: dispatch
Jan 21 14:13:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 23 KiB/s wr, 5 op/s
Jan 21 14:13:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:42 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:13:42 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "format": "json"}]: dispatch
Jan 21 14:13:42 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:13:42 compute-0 nova_compute[239261]: 2026-01-21 14:13:42.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:42 compute-0 nova_compute[239261]: 2026-01-21 14:13:42.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:42 compute-0 nova_compute[239261]: 2026-01-21 14:13:42.754 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:13:42 compute-0 nova_compute[239261]: 2026-01-21 14:13:42.755 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:13:42 compute-0 nova_compute[239261]: 2026-01-21 14:13:42.755 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:13:42 compute-0 nova_compute[239261]: 2026-01-21 14:13:42.755 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:13:42 compute-0 nova_compute[239261]: 2026-01-21 14:13:42.756 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:13:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:13:43 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616145266' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.310 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
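[annotation] The nova resource tracker audits Ceph-backed disk capacity by shelling out to `ceph df --format=json` (the oslo.concurrency lines above, answered by the mon's df dispatch). A sketch of the same call and the fields a consumer would typically read; the exact JSON layout varies by Ceph release, so treat the field names as assumptions:

    # Sketch: run the command nova logged above and pull cluster totals from
    # the JSON, using the common `ceph df -f json` layout.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"], text=True)
    df = json.loads(out)
    stats = df["stats"]
    print("total GiB:", stats["total_bytes"] / 2**30,
          "avail GiB:", stats["total_avail_bytes"] / 2**30)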
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.477 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.479 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5016MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.480 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.480 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:13:43 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "f1069ca7-a7f5-4b7d-93eb-79908004053c", "format": "json"}]: dispatch
Jan 21 14:13:43 compute-0 ceph-mon[75031]: pgmap v1001: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 23 KiB/s wr, 5 op/s
Jan 21 14:13:43 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2616145266' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.729 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.729 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.803 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing inventories for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.880 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating ProviderTree inventory for provider 172aa181-ce4f-4953-808e-b8a26e60249f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.880 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating inventory in ProviderTree for provider 172aa181-ce4f-4953-808e-b8a26e60249f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
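The inventory dictionaries above determine what Placement will let the scheduler consume: for each resource class the usable capacity is (total - reserved) * allocation_ratio. A worked check against the figures in this payload, plain arithmetic with no Nova imports assumed:

    # Effective capacity Placement derives from the inventory logged above:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1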
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.903 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing aggregate associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.930 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing trait associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 21 14:13:43 compute-0 nova_compute[239261]: 2026-01-21 14:13:43.954 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:13:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 23 KiB/s wr, 5 op/s
Jan 21 14:13:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:13:44 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031350241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:13:44 compute-0 nova_compute[239261]: 2026-01-21 14:13:44.482 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:13:44 compute-0 nova_compute[239261]: 2026-01-21 14:13:44.486 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:13:44 compute-0 nova_compute[239261]: 2026-01-21 14:13:44.563 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:13:44 compute-0 nova_compute[239261]: 2026-01-21 14:13:44.564 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:13:44 compute-0 nova_compute[239261]: 2026-01-21 14:13:44.564 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
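The acquire/release pair bracketing this update (waited 0.000s, held 1.084s) comes from oslo.concurrency's named-lock decorator; everything the tracker logs between those lines runs under the in-process "compute_resources" semaphore. A hedged sketch of the same primitive, with a hypothetical body standing in for the real _update_available_resource:

    from oslo_concurrency import lockutils

    # Named in-process lock; fair=True hands the lock out in FIFO order,
    # which matches the strict acquire/release pairing seen in the log.
    @lockutils.synchronized("compute_resources", fair=True)
    def update_available_resource():
        ...  # hypothetical stand-in for the tracker's real update body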
Jan 21 14:13:44 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3031350241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:13:45 compute-0 nova_compute[239261]: 2026-01-21 14:13:45.565 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:45 compute-0 nova_compute[239261]: 2026-01-21 14:13:45.565 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:45 compute-0 nova_compute[239261]: 2026-01-21 14:13:45.566 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:45 compute-0 nova_compute[239261]: 2026-01-21 14:13:45.566 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:13:45 compute-0 ceph-mon[75031]: pgmap v1002: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 23 KiB/s wr, 5 op/s
Jan 21 14:13:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:13:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
Jan 21 14:13:45 compute-0 nova_compute[239261]: 2026-01-21 14:13:45.720 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:45 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6321f0ab-1903-4b13-841b-f76cfd9c3cac/4132ab36-79f1-480e-a2a7-55e9bc4b49be'.
Jan 21 14:13:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6321f0ab-1903-4b13-841b-f76cfd9c3cac/.meta.tmp'
Jan 21 14:13:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6321f0ab-1903-4b13-841b-f76cfd9c3cac/.meta.tmp' to config b'/volumes/_nogroup/6321f0ab-1903-4b13-841b-f76cfd9c3cac/.meta'
Jan 21 14:13:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
Jan 21 14:13:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "format": "json"}]: dispatch
Jan 21 14:13:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
Jan 21 14:13:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
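This create/getpath pair is the standard CephFS share-provisioning sequence: the mgr volumes module creates a quota-limited, namespace-isolated subvolume, then resolves the path that becomes the export location. The JSON mon_command bodies map one-to-one onto the ceph CLI; a hedged reproduction of the same two calls, with flag spellings assumed from the logged JSON keys:

    import subprocess

    VOL = "cephfs"
    SUB = "6321f0ab-1903-4b13-841b-f76cfd9c3cac"

    # 1 GiB quota, isolated RADOS namespace, mode 0755, as logged above.
    subprocess.run(
        ["ceph", "fs", "subvolume", "create", VOL, SUB,
         "--size", "1073741824", "--namespace-isolated", "--mode", "0755"],
        check=True)

    # Returns /volumes/_nogroup/<sub_name>/<uuid>, the share's export path.
    path = subprocess.run(
        ["ceph", "fs", "subvolume", "getpath", VOL, SUB],
        check=True, capture_output=True, text=True).stdout.strip()
    print(path)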
Jan 21 14:13:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:13:45 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:13:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 31 KiB/s wr, 6 op/s
Jan 21 14:13:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:13:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "format": "json"}]: dispatch
Jan 21 14:13:46 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:13:46 compute-0 nova_compute[239261]: 2026-01-21 14:13:46.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:46 compute-0 nova_compute[239261]: 2026-01-21 14:13:46.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:46 compute-0 nova_compute[239261]: 2026-01-21 14:13:46.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 21 14:13:46 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7", "format": "json"}]: dispatch
Jan 21 14:13:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:47 compute-0 ceph-mon[75031]: pgmap v1003: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 31 KiB/s wr, 6 op/s
Jan 21 14:13:47 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7", "format": "json"}]: dispatch
Jan 21 14:13:47 compute-0 nova_compute[239261]: 2026-01-21 14:13:47.790 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:47 compute-0 nova_compute[239261]: 2026-01-21 14:13:47.810 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 2 op/s
Jan 21 14:13:48 compute-0 nova_compute[239261]: 2026-01-21 14:13:48.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:13:48 compute-0 nova_compute[239261]: 2026-01-21 14:13:48.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 21 14:13:48 compute-0 nova_compute[239261]: 2026-01-21 14:13:48.740 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 21 14:13:49 compute-0 ceph-mon[75031]: pgmap v1004: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 2 op/s
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 4 op/s
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662142449868504 of space, bias 1.0, pg target 0.19986427349605512 quantized to 32 (current 32)
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.157038733433985e-05 of space, bias 4.0, pg target 0.06188446480120782 quantized to 16 (current 16)
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:13:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
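Each pool line in the _maybe_adjust block above follows one formula. With three OSDs and the default mon_target_pg_per_osd of 100, the cluster's PG budget is 300; a pool's raw target is its share of used space times that budget times its bias, and the result is then quantized toward a power of two subject to per-pool minimums. A check against the cephfs.cephfs.meta line, assuming those defaults:

    # raw_pg_target = usage_ratio * (mon_target_pg_per_osd * num_osds) * bias
    usage_ratio = 5.157038733433985e-05   # "using ... of space" for the pool
    bias = 4.0                            # metadata pools carry a 4x bias
    budget = 100 * 3                      # assumed default * 3 OSDs

    print(usage_ratio * budget * bias)    # ~0.06188, matching "pg target"
    # Quantization and the pool's floor then keep it at 16 PGs, which is
    # why the log reports "quantized to 16 (current 16)".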
Jan 21 14:13:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7_dabc2c37-27ba-4a51-926a-eca273cf108c", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7_dabc2c37-27ba-4a51-926a-eca273cf108c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp'
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp' to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta'
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7_dabc2c37-27ba-4a51-926a-eca273cf108c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:51 compute-0 ceph-mon[75031]: pgmap v1005: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 4 op/s
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp'
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp' to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta'
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "format": "json"}]: dispatch
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:13:51 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:51.922+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6321f0ab-1903-4b13-841b-f76cfd9c3cac' of type subvolume
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6321f0ab-1903-4b13-841b-f76cfd9c3cac' of type subvolume
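The (95) Operation not supported reply here is expected rather than a fault: fs clone status is only defined for subvolumes of type 'clone', and the client probes every subvolume this way before deleting it. One way to avoid the noisy probe is to inspect the subvolume's type first; a hedged sketch, assuming fs subvolume info reports a 'type' field of 'clone' versus 'subvolume':

    import json
    import subprocess

    def is_clone(vol: str, sub: str) -> bool:
        # "type" is "clone" while a clone is materializing, else "subvolume".
        out = subprocess.run(
            ["ceph", "fs", "subvolume", "info", vol, sub, "--format=json"],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out).get("type") == "clone"

    if is_clone("cephfs", "6321f0ab-1903-4b13-841b-f76cfd9c3cac"):
        subprocess.run(["ceph", "fs", "clone", "status", "cephfs",
                        "6321f0ab-1903-4b13-841b-f76cfd9c3cac"], check=True)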
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6321f0ab-1903-4b13-841b-f76cfd9c3cac'' moved to trashcan
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:13:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6321f0ab-1903-4b13-841b-f76cfd9c3cac, vol_name:cephfs) < ""
Jan 21 14:13:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 2 op/s
Jan 21 14:13:52 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7_dabc2c37-27ba-4a51-926a-eca273cf108c", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:52 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "c7c0cfbe-1a2d-4759-861e-1f5bef8b3de7", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:52 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "format": "json"}]: dispatch
Jan 21 14:13:52 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6321f0ab-1903-4b13-841b-f76cfd9c3cac", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:53 compute-0 ceph-mon[75031]: pgmap v1006: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 2 op/s
Jan 21 14:13:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 2 op/s
Jan 21 14:13:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "f1069ca7-a7f5-4b7d-93eb-79908004053c_0d157e9e-6dbe-4da5-910a-8205b873e355", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c_0d157e9e-6dbe-4da5-910a-8205b873e355, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp'
Jan 21 14:13:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp' to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta'
Jan 21 14:13:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c_0d157e9e-6dbe-4da5-910a-8205b873e355, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "f1069ca7-a7f5-4b7d-93eb-79908004053c", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp'
Jan 21 14:13:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta.tmp' to config b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0/.meta'
Jan 21 14:13:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1069ca7-a7f5-4b7d-93eb-79908004053c, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:55 compute-0 ceph-mon[75031]: pgmap v1007: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 2 op/s
Jan 21 14:13:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:13:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 5 op/s
Jan 21 14:13:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:13:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 14:13:56 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e92e9c0d-aab4-453c-97fd-2dccbd1b01ca/3947d664-f79c-4559-a25c-0c3cb75d8faa'.
Jan 21 14:13:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e92e9c0d-aab4-453c-97fd-2dccbd1b01ca/.meta.tmp'
Jan 21 14:13:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e92e9c0d-aab4-453c-97fd-2dccbd1b01ca/.meta.tmp' to config b'/volumes/_nogroup/e92e9c0d-aab4-453c-97fd-2dccbd1b01ca/.meta'
Jan 21 14:13:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 14:13:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "format": "json"}]: dispatch
Jan 21 14:13:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 14:13:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 14:13:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:13:56 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:13:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 21 14:13:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 21 14:13:56 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "f1069ca7-a7f5-4b7d-93eb-79908004053c_0d157e9e-6dbe-4da5-910a-8205b873e355", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:56 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "snap_name": "f1069ca7-a7f5-4b7d-93eb-79908004053c", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:56 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:13:56 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 21 14:13:57 compute-0 ceph-mon[75031]: pgmap v1008: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 5 op/s
Jan 21 14:13:57 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:13:57 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "format": "json"}]: dispatch
Jan 21 14:13:57 compute-0 ceph-mon[75031]: osdmap e140: 3 total, 3 up, 3 in
Jan 21 14:13:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 35 KiB/s wr, 5 op/s
Jan 21 14:13:58 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "format": "json"}]: dispatch
Jan 21 14:13:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:13:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:13:58 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:13:58.575+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0' of type subvolume
Jan 21 14:13:58 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0' of type subvolume
Jan 21 14:13:58 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "force": true, "format": "json"}]: dispatch
Jan 21 14:13:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0'' moved to trashcan
Jan 21 14:13:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:13:58 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0, vol_name:cephfs) < ""
Jan 21 14:13:59 compute-0 ceph-mon[75031]: pgmap v1010: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 35 KiB/s wr, 5 op/s
Jan 21 14:13:59 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "format": "json"}]: dispatch
Jan 21 14:13:59 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bba0c3cf-adf3-468e-ae9d-3e37f7ff8fa0", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 56 KiB/s wr, 7 op/s
Jan 21 14:14:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "format": "json"}]: dispatch
Jan 21 14:14:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:00 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e92e9c0d-aab4-453c-97fd-2dccbd1b01ca' of type subvolume
Jan 21 14:14:00 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:00.741+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e92e9c0d-aab4-453c-97fd-2dccbd1b01ca' of type subvolume
Jan 21 14:14:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 14:14:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e92e9c0d-aab4-453c-97fd-2dccbd1b01ca'' moved to trashcan
Jan 21 14:14:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:14:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e92e9c0d-aab4-453c-97fd-2dccbd1b01ca, vol_name:cephfs) < ""
Jan 21 14:14:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:14:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 21 14:14:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 21 14:14:01 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 21 14:14:01 compute-0 ceph-mon[75031]: pgmap v1011: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 56 KiB/s wr, 7 op/s
Jan 21 14:14:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "format": "json"}]: dispatch
Jan 21 14:14:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e92e9c0d-aab4-453c-97fd-2dccbd1b01ca", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:01 compute-0 ceph-mon[75031]: osdmap e141: 3 total, 3 up, 3 in
Jan 21 14:14:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 70 KiB/s wr, 8 op/s
Jan 21 14:14:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 14:14:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:03 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:14:03.674 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:14:03 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:14:03.675 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:14:04 compute-0 ceph-mon[75031]: pgmap v1013: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 70 KiB/s wr, 8 op/s
Jan 21 14:14:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 47 KiB/s wr, 5 op/s
Jan 21 14:14:04 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:14:04.677 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:14:05 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 14:14:05 compute-0 ceph-mon[75031]: pgmap v1014: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 47 KiB/s wr, 5 op/s
Jan 21 14:14:05 compute-0 podman[247504]: 2026-01-21 14:14:05.363745756 +0000 UTC m=+0.070010600 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:14:05 compute-0 podman[247503]: 2026-01-21 14:14:05.398190047 +0000 UTC m=+0.113208751 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
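Both health_status=healthy events above come from podman's native healthcheck support: per the logged config_data, each container mounts /var/lib/openstack/healthchecks/<name> at /openstack and runs /openstack/healthcheck as its test command. The same check can be driven on demand; a minimal sketch:

    import subprocess

    # Exit status 0 means the container's configured healthcheck passed,
    # matching the health_status=healthy events journald records above.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        subprocess.run(["podman", "healthcheck", "run", name], check=True)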
Jan 21 14:14:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:14:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 646 B/s rd, 58 KiB/s wr, 7 op/s
Jan 21 14:14:06 compute-0 nova_compute[239261]: 2026-01-21 14:14:06.393 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:14:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:06 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 14:14:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 14:14:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 14:14:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:14:06 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:06 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
Jan 21 14:14:07 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8bc5fdaa-02e4-4394-ab57-82acdd89427e/8d8c9d37-265b-4f39-a819-7b0f7f9a7c1c'.
Jan 21 14:14:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8bc5fdaa-02e4-4394-ab57-82acdd89427e/.meta.tmp'
Jan 21 14:14:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8bc5fdaa-02e4-4394-ab57-82acdd89427e/.meta.tmp' to config b'/volumes/_nogroup/8bc5fdaa-02e4-4394-ab57-82acdd89427e/.meta'
Jan 21 14:14:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
Jan 21 14:14:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "format": "json"}]: dispatch
Jan 21 14:14:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
Jan 21 14:14:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
Jan 21 14:14:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:14:07 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:07 compute-0 ceph-mon[75031]: pgmap v1015: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 646 B/s rd, 58 KiB/s wr, 7 op/s
Jan 21 14:14:07 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 14:14:07 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:07 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 55 KiB/s wr, 7 op/s
Jan 21 14:14:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "format": "json"}]: dispatch
Jan 21 14:14:08 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "format": "json"}]: dispatch
Jan 21 14:14:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:08 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8bc5fdaa-02e4-4394-ab57-82acdd89427e' of type subvolume
Jan 21 14:14:08 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:08.753+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8bc5fdaa-02e4-4394-ab57-82acdd89427e' of type subvolume
Jan 21 14:14:08 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
Jan 21 14:14:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8bc5fdaa-02e4-4394-ab57-82acdd89427e'' moved to trashcan
Jan 21 14:14:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:14:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8bc5fdaa-02e4-4394-ab57-82acdd89427e, vol_name:cephfs) < ""
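[annotation] The "(95) Operation not supported" replies above are a normal negative probe, not a failure: "fs clone status" is only valid for subvolumes created as clones, so the caller treats EOPNOTSUPP as "not a clone" and proceeds straight to a forced rm. The "moved to trashcan" / "queuing job" lines show the data removal is deferred to the volumes module's async purge job. A sketch of that probe-then-delete pattern — helper names are hypothetical, ceph CLI and client.openstack keyring assumed:

    # Sketch of the probe-then-delete pattern seen in the audit log.
    import subprocess

    BASE = ["ceph", "--name", "client.openstack"]

    def clone_in_progress(sub_name: str) -> bool:
        proc = subprocess.run(
            BASE + ["fs", "clone", "status", "cephfs", sub_name,
                    "--format", "json"],
            capture_output=True, text=True)
        if proc.returncode != 0:
            return False  # EOPNOTSUPP (95): a plain subvolume, not a clone
        return '"in-progress"' in proc.stdout  # crude check of status.state

    def delete_subvolume(sub_name: str) -> None:
        if not clone_in_progress(sub_name):
            # --force: do not fail if already gone; actual removal happens
            # later in the async purge thread ("moved to trashcan").
            subprocess.run(
                BASE + ["fs", "subvolume", "rm", "cephfs", sub_name,
                        "--force"],
                check=True)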
Jan 21 14:14:09 compute-0 ceph-mon[75031]: pgmap v1016: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 55 KiB/s wr, 7 op/s
Jan 21 14:14:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "format": "json"}]: dispatch
Jan 21 14:14:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8bc5fdaa-02e4-4394-ab57-82acdd89427e", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 38 KiB/s wr, 4 op/s
Jan 21 14:14:10 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:14:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:14:11 compute-0 ceph-mon[75031]: pgmap v1017: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 38 KiB/s wr, 4 op/s
Jan 21 14:14:11 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/9d663342-1ee6-4020-b764-46d047183f0b'.
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp'
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp' to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta'
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "format": "json"}]: dispatch
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 14:14:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 14:14:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:14:11 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 34 KiB/s wr, 4 op/s
Jan 21 14:14:12 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 14:14:13 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "format": "json"}]: dispatch
Jan 21 14:14:13 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ee86be96-97ed-41e6-a8dc-978f7f6c00d9/e221679c-b44b-4c53-ae2f-803a06a1737b'.
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee86be96-97ed-41e6-a8dc-978f7f6c00d9/.meta.tmp'
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee86be96-97ed-41e6-a8dc-978f7f6c00d9/.meta.tmp' to config b'/volumes/_nogroup/ee86be96-97ed-41e6-a8dc-978f7f6c00d9/.meta'
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "format": "json"}]: dispatch
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 14:14:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:14:13 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6fd14f2b-0487-4f6b-a678-d4c00c894fd8'' moved to trashcan
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:14:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fd14f2b-0487-4f6b-a678-d4c00c894fd8, vol_name:cephfs) < ""
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007'.
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 4 op/s
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/.meta.tmp'
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/.meta.tmp' to config b'/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/.meta'
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "format": "json"}]: dispatch
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:14:14 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "snap_name": "7e50f506-28fa-4be1-8389-acdae8ac8ba3", "format": "json"}]: dispatch
Jan 21 14:14:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 14:14:14 compute-0 ceph-mon[75031]: pgmap v1018: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 34 KiB/s wr, 4 op/s
Jan 21 14:14:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "format": "json"}]: dispatch
Jan 21 14:14:14 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "format": "json"}]: dispatch
Jan 21 14:14:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6fd14f2b-0487-4f6b-a678-d4c00c894fd8", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:15 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 14:14:15 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:15 compute-0 ceph-mon[75031]: pgmap v1019: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 4 op/s
Jan 21 14:14:15 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "format": "json"}]: dispatch
Jan 21 14:14:15 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:15 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "snap_name": "7e50f506-28fa-4be1-8389-acdae8ac8ba3", "format": "json"}]: dispatch
Jan 21 14:14:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:14:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 56 KiB/s wr, 7 op/s
Jan 21 14:14:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501_9e095a23-07cd-445c-a098-afd719bc8021", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501_9e095a23-07cd-445c-a098-afd719bc8021, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp'
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp' to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta'
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501_9e095a23-07cd-445c-a098-afd719bc8021, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 42 KiB/s wr, 5 op/s
Jan 21 14:14:18 compute-0 ceph-mon[75031]: pgmap v1020: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 56 KiB/s wr, 7 op/s
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp'
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta.tmp' to config b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e/.meta'
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09b94b8f-18fe-41bc-bc29-2dce63cc4501, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "snap_name": "7e50f506-28fa-4be1-8389-acdae8ac8ba3_550e4856-6eec-4620-8d3b-79929da3bf92", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3_550e4856-6eec-4620-8d3b-79929da3bf92, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp'
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp' to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta'
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3_550e4856-6eec-4620-8d3b-79929da3bf92, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "snap_name": "7e50f506-28fa-4be1-8389-acdae8ac8ba3", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp'
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta.tmp' to config b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c/.meta'
Jan 21 14:14:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e50f506-28fa-4be1-8389-acdae8ac8ba3, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
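[annotation] Snapshot removal above comes in pairs: a suffixed name ("<snap>_<id>") is deleted first, then the base snapshot, and each rm rewrites the subvolume's .meta file via the .meta.tmp rename. A sketch of the same sequence with the ceph CLI, reusing the IDs from the log; --force keeps a missing snapshot from raising an error:

    # Sketch of the snapshot create/remove sequence from the audit log.
    import subprocess

    def ceph(*args: str) -> None:
        subprocess.run(["ceph", "--name", "client.openstack", *args],
                       check=True)

    sub = "398fce8f-70d1-42b2-8ff9-f180fc0fd07c"   # copied from the log
    snap = "7e50f506-28fa-4be1-8389-acdae8ac8ba3"  # copied from the log

    ceph("fs", "subvolume", "snapshot", "create", "cephfs", sub, snap)
    # the suffixed sibling is removed first, then the snapshot itself
    for name in (snap + "_550e4856-6eec-4620-8d3b-79929da3bf92", snap):
        ceph("fs", "subvolume", "snapshot", "rm", "cephfs", sub, name,
             "--force")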
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve49", "tenant_id": "42f926cfde224068a742ef536ed79928", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
Jan 21 14:14:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Jan 21 14:14:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 21 14:14:19 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID eve49 with tenant 42f926cfde224068a742ef536ed79928
Jan 21 14:14:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:14:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:14:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
Jan 21 14:14:19 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501_9e095a23-07cd-445c-a098-afd719bc8021", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:19 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "snap_name": "09b94b8f-18fe-41bc-bc29-2dce63cc4501", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:19 compute-0 ceph-mon[75031]: pgmap v1021: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 42 KiB/s wr, 5 op/s
Jan 21 14:14:19 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "snap_name": "7e50f506-28fa-4be1-8389-acdae8ac8ba3_550e4856-6eec-4620-8d3b-79929da3bf92", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:19 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "snap_name": "7e50f506-28fa-4be1-8389-acdae8ac8ba3", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 21 14:14:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:14:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
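[annotation] The mon audit lines above show how "fs subvolume authorize" is carried out: the mgr first runs "auth get" for the requested ID, then "auth get-or-create" with path-scoped mds caps, namespace-scoped osd caps, and read-only mon caps. A sketch of sending that same mon command through python-rados — assumes python-rados is installed and an admin keyring is available; the caps strings are copied verbatim from the audit line:

    # Sketch: issue the same "auth get-or-create" the mgr sent to the mon.
    import json
    import rados

    cmd = {
        "prefix": "auth get-or-create",
        "entity": "client.eve49",
        "caps": [
            "mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007",
            "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221",
            "mon", "allow r",
        ],
        "format": "json",
    }

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.admin")
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        if ret != 0:
            raise RuntimeError(outs)
        # expected shape: [{"entity": "client.eve49", "key": ..., "caps": ...}]
        print(json.loads(outbuf))
    finally:
        cluster.shutdown()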
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "format": "json"}]: dispatch
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee86be96-97ed-41e6-a8dc-978f7f6c00d9' of type subvolume
Jan 21 14:14:19 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:19.418+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee86be96-97ed-41e6-a8dc-978f7f6c00d9' of type subvolume
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee86be96-97ed-41e6-a8dc-978f7f6c00d9'' moved to trashcan
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:14:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee86be96-97ed-41e6-a8dc-978f7f6c00d9, vol_name:cephfs) < ""
Jan 21 14:14:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 79 KiB/s wr, 10 op/s
Jan 21 14:14:20 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve49", "tenant_id": "42f926cfde224068a742ef536ed79928", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:14:20 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "format": "json"}]: dispatch
Jan 21 14:14:20 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee86be96-97ed-41e6-a8dc-978f7f6c00d9", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:14:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4784 writes, 21K keys, 4784 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4784 writes, 4784 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1427 writes, 6540 keys, 1427 commit groups, 1.0 writes per commit group, ingest: 9.48 MB, 0.02 MB/s
                                           Interval WAL: 1427 writes, 1427 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     41.5      0.60              0.08        12    0.050       0      0       0.0       0.0
                                             L6      1/0    7.51 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     37.3     30.7      2.63              0.24        11    0.239     48K   5782       0.0       0.0
                                            Sum      1/0    7.51 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     30.4     32.7      3.23              0.32        23    0.140     48K   5782       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4     46.3     46.7      1.00              0.14        10    0.100     24K   2590       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     37.3     30.7      2.63              0.24        11    0.239     48K   5782       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     41.7      0.59              0.08        11    0.054       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.10 GB read, 0.05 MB/s read, 3.2 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562240bf58d0#2 capacity: 304.00 MB usage: 9.34 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.0001 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(572,8.94 MB,2.93957%) FilterBlock(24,142.05 KB,0.0456308%) IndexBlock(24,274.08 KB,0.0880442%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 21 14:14:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:14:21 compute-0 ceph-mon[75031]: pgmap v1022: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 79 KiB/s wr, 10 op/s
Jan 21 14:14:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "format": "json"}]: dispatch
Jan 21 14:14:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:21 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1442b436-f5bb-47c2-acbf-ac7903d9399e' of type subvolume
Jan 21 14:14:21 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:21.721+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1442b436-f5bb-47c2-acbf-ac7903d9399e' of type subvolume
Jan 21 14:14:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:14:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1442b436-f5bb-47c2-acbf-ac7903d9399e'' moved to trashcan
Jan 21 14:14:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:14:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1442b436-f5bb-47c2-acbf-ac7903d9399e, vol_name:cephfs) < ""
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "format": "json"}]: dispatch
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:22 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:22.159+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '398fce8f-70d1-42b2-8ff9-f180fc0fd07c' of type subvolume
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '398fce8f-70d1-42b2-8ff9-f180fc0fd07c' of type subvolume
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/398fce8f-70d1-42b2-8ff9-f180fc0fd07c'' moved to trashcan
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:398fce8f-70d1-42b2-8ff9-f180fc0fd07c, vol_name:cephfs) < ""
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve48", "tenant_id": "42f926cfde224068a742ef536ed79928", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
Jan 21 14:14:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Jan 21 14:14:22 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 21 14:14:22 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID eve48 with tenant 42f926cfde224068a742ef536ed79928
Jan 21 14:14:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:14:22 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:14:22 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
Jan 21 14:14:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 61 KiB/s wr, 8 op/s
Jan 21 14:14:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 21 14:14:22 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "format": "json"}]: dispatch
Jan 21 14:14:22 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1442b436-f5bb-47c2-acbf-ac7903d9399e", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 21 14:14:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:14:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:14:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 21 14:14:22 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 21 14:14:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:14:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1668645748' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:14:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:14:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1668645748' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:14:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "format": "json"}]: dispatch
Jan 21 14:14:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "398fce8f-70d1-42b2-8ff9-f180fc0fd07c", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve48", "tenant_id": "42f926cfde224068a742ef536ed79928", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:14:23 compute-0 ceph-mon[75031]: pgmap v1023: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 61 KiB/s wr, 8 op/s
Jan 21 14:14:23 compute-0 ceph-mon[75031]: osdmap e142: 3 total, 3 up, 3 in
Jan 21 14:14:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1668645748' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:14:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1668645748' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:14:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 74 KiB/s wr, 10 op/s
Jan 21 14:14:25 compute-0 ceph-mon[75031]: pgmap v1025: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 74 KiB/s wr, 10 op/s
Jan 21 14:14:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 82 KiB/s wr, 12 op/s
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/dd111cee-cd8c-410d-afba-122eba9f97ef/598a97dd-299d-4cce-beaf-351d6cc6c6de'.
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dd111cee-cd8c-410d-afba-122eba9f97ef/.meta.tmp'
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dd111cee-cd8c-410d-afba-122eba9f97ef/.meta.tmp' to config b'/volumes/_nogroup/dd111cee-cd8c-410d-afba-122eba9f97ef/.meta'
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "format": "json"}]: dispatch
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
Jan 21 14:14:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:14:26 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve48", "format": "json"}]: dispatch
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Jan 21 14:14:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 21 14:14:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0)
Jan 21 14:14:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Jan 21 14:14:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve48", "format": "json"}]: dispatch
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007
Jan 21 14:14:26 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007],prefix=session evict} (starting...)
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:14:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
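The block above is one complete access-revocation round trip: the mgr volumes module receives `fs subvolume deauthorize` from client.openstack, translates it into mon `auth get` / `auth rm` calls for client.eve48, and then `fs subvolume evict` asks the MDS to drop any remaining sessions matching that auth name and client root. A minimal manual equivalent, assuming admin access to this cluster (subvolume UUID and auth_id copied from the log):

    # revoke the cephx identity backing the share access (the mgr issues "auth rm" for it)
    ceph fs subvolume deauthorize cephfs 0a87ab89-25c8-43d7-9b97-672b44e8c221 eve48
    # evict any CephFS sessions still open under that identity
    ceph fs subvolume evict cephfs 0a87ab89-25c8-43d7-9b97-672b44e8c221 eve48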
Jan 21 14:14:27 compute-0 ceph-mon[75031]: pgmap v1026: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 82 KiB/s wr, 12 op/s
Jan 21 14:14:27 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:27 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "format": "json"}]: dispatch
Jan 21 14:14:27 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:27 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve48", "format": "json"}]: dispatch
Jan 21 14:14:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Jan 21 14:14:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Jan 21 14:14:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Jan 21 14:14:27 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve48", "format": "json"}]: dispatch
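The mon echoes above carry the full `fs subvolume create` payload dispatched by the share driver and re-logged at 14:14:27. A rough CLI equivalent is sketched below; the flag names mirror the JSON keys in the logged payload (`size`, `namespace_isolated`, `mode`), so treat the exact spelling as an assumption:

    # 1 GiB subvolume in an isolated RADOS namespace, per the logged payload
    ceph fs subvolume create cephfs dd111cee-cd8c-410d-afba-122eba9f97ef \
        --size 1073741824 --namespace-isolated --mode 0755
    # resolve the backing path that getpath returned to the share driver
    ceph fs subvolume getpath cephfs dd111cee-cd8c-410d-afba-122eba9f97ef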
Jan 21 14:14:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 82 KiB/s wr, 12 op/s
Jan 21 14:14:29 compute-0 ceph-mon[75031]: pgmap v1027: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 82 KiB/s wr, 12 op/s
Jan 21 14:14:29 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "format": "json"}]: dispatch
Jan 21 14:14:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dd111cee-cd8c-410d-afba-122eba9f97ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dd111cee-cd8c-410d-afba-122eba9f97ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:30.000+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dd111cee-cd8c-410d-afba-122eba9f97ef' of type subvolume
Jan 21 14:14:30 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dd111cee-cd8c-410d-afba-122eba9f97ef' of type subvolume
Jan 21 14:14:30 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
Jan 21 14:14:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dd111cee-cd8c-410d-afba-122eba9f97ef'' moved to trashcan
Jan 21 14:14:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:14:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dd111cee-cd8c-410d-afba-122eba9f97ef, vol_name:cephfs) < ""
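The EOPNOTSUPP (95) replies just above are expected rather than a failure: `fs clone status` is only valid for subvolumes of type clone, so probing a plain subvolume returns error 95, after which the caller falls back to a forced removal that moves the path to the trashcan and queues an asynchronous purge job. Reproduced by hand this would look like:

    # fails with (95) Operation not supported on a non-clone subvolume, as logged
    ceph fs clone status cephfs dd111cee-cd8c-410d-afba-122eba9f97ef
    # forced removal; the path is trashed and purged by the async job queue
    ceph fs subvolume rm cephfs dd111cee-cd8c-410d-afba-122eba9f97ef --force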
Jan 21 14:14:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 68 KiB/s wr, 9 op/s
Jan 21 14:14:30 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve47", "tenant_id": "42f926cfde224068a742ef536ed79928", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:14:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
Jan 21 14:14:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Jan 21 14:14:30 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 21 14:14:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID eve47 with tenant 42f926cfde224068a742ef536ed79928
Jan 21 14:14:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:14:30 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:14:30 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:14:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, tenant_id:42f926cfde224068a742ef536ed79928, vol_name:cephfs) < ""
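The authorize sequence above shows what `fs subvolume authorize` actually provisions: an `auth get-or-create` for client.eve47 with an MDS cap pinned to the subvolume path, an OSD cap pinned to pool cephfs.cephfs.data plus the subvolume's RADOS namespace, and read-only mon access. A sketch of the same call from the CLI, with flag spellings again inferred from the logged JSON keys:

    # grant rw access for auth_id eve47; tenant_id is recorded in mgr metadata (flag name assumed)
    ceph fs subvolume authorize cephfs 0a87ab89-25c8-43d7-9b97-672b44e8c221 eve47 \
        --access_level rw --tenant_id 42f926cfde224068a742ef536ed79928
    # inspect the caps that get-or-create produced
    ceph auth get client.eve47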
Jan 21 14:14:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:14:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 21 14:14:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 21 14:14:31 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 21 14:14:31 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "format": "json"}]: dispatch
Jan 21 14:14:31 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dd111cee-cd8c-410d-afba-122eba9f97ef", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:31 compute-0 ceph-mon[75031]: pgmap v1028: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 68 KiB/s wr, 9 op/s
Jan 21 14:14:31 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve47", "tenant_id": "42f926cfde224068a742ef536ed79928", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:14:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 21 14:14:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:14:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0a87ab89-25c8-43d7-9b97-672b44e8c221", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:14:31 compute-0 ceph-mon[75031]: osdmap e143: 3 total, 3 up, 3 in
Jan 21 14:14:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 719 B/s rd, 68 KiB/s wr, 9 op/s
Jan 21 14:14:33 compute-0 ceph-mon[75031]: pgmap v1030: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 719 B/s rd, 68 KiB/s wr, 9 op/s
Jan 21 14:14:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:14:33.904 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:14:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:14:33.905 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:14:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:14:33.905 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:14:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 68 KiB/s wr, 9 op/s
Jan 21 14:14:34 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve47", "format": "json"}]: dispatch
Jan 21 14:14:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Jan 21 14:14:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 21 14:14:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0)
Jan 21 14:14:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Jan 21 14:14:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Jan 21 14:14:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:34 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve47", "format": "json"}]: dispatch
Jan 21 14:14:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007
Jan 21 14:14:34 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007],prefix=session evict} (starting...)
Jan 21 14:14:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:14:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:35 compute-0 ceph-mon[75031]: pgmap v1031: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 68 KiB/s wr, 9 op/s
Jan 21 14:14:35 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve47", "format": "json"}]: dispatch
Jan 21 14:14:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Jan 21 14:14:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Jan 21 14:14:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Jan 21 14:14:35 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve47", "format": "json"}]: dispatch
Jan 21 14:14:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:14:36 compute-0 podman[247553]: 2026-01-21 14:14:36.332493844 +0000 UTC m=+0.055246673 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 14:14:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 58 KiB/s wr, 7 op/s
Jan 21 14:14:36 compute-0 podman[247552]: 2026-01-21 14:14:36.35637572 +0000 UTC m=+0.085528444 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 14:14:36 compute-0 sudo[247594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:14:36 compute-0 sudo[247594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:14:36 compute-0 sudo[247594]: pam_unix(sudo:session): session closed for user root
Jan 21 14:14:36 compute-0 sudo[247619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:14:36 compute-0 sudo[247619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:14:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "format": "json"}]: dispatch
Jan 21 14:14:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98584c80-dc48-400e-a1ef-b94d26420f34, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98584c80-dc48-400e-a1ef-b94d26420f34, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:37 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:37.277+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98584c80-dc48-400e-a1ef-b94d26420f34' of type subvolume
Jan 21 14:14:37 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98584c80-dc48-400e-a1ef-b94d26420f34' of type subvolume
Jan 21 14:14:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 14:14:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/98584c80-dc48-400e-a1ef-b94d26420f34'' moved to trashcan
Jan 21 14:14:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:14:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98584c80-dc48-400e-a1ef-b94d26420f34, vol_name:cephfs) < ""
Jan 21 14:14:37 compute-0 sudo[247619]: pam_unix(sudo:session): session closed for user root
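The ceph-admin sudo pair that just closed is cephadm's periodic host inventory: it first locates python3, then runs the copied cephadm binary with `gather-facts` under an 895 s timeout. Assuming a packaged `cephadm` is on the host (the log invokes the copied binary by its hashed path instead), the same inventory can be pulled manually:

    # host facts (OS, memory, NICs, disks) as JSON, as collected for the orchestrator
    sudo cephadm gather-facts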
Jan 21 14:14:37 compute-0 ceph-mon[75031]: pgmap v1032: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 58 KiB/s wr, 7 op/s
Jan 21 14:14:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:14:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:14:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:14:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:14:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:14:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:14:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:14:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:14:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:14:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:14:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:14:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:14:37 compute-0 sudo[247674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:14:37 compute-0 sudo[247674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:14:37 compute-0 sudo[247674]: pam_unix(sudo:session): session closed for user root
Jan 21 14:14:37 compute-0 sudo[247699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:14:37 compute-0 sudo[247699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:14:37 compute-0 podman[247737]: 2026-01-21 14:14:37.901261203 +0000 UTC m=+0.024627065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:14:38 compute-0 podman[247737]: 2026-01-21 14:14:38.16321237 +0000 UTC m=+0.286578212 container create b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 21 14:14:38 compute-0 systemd[1]: Started libpod-conmon-b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645.scope.
Jan 21 14:14:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:14:38 compute-0 podman[247737]: 2026-01-21 14:14:38.266325067 +0000 UTC m=+0.389691009 container init b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:14:38 compute-0 podman[247737]: 2026-01-21 14:14:38.2735369 +0000 UTC m=+0.396902742 container start b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:14:38 compute-0 podman[247737]: 2026-01-21 14:14:38.277546207 +0000 UTC m=+0.400912099 container attach b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 14:14:38 compute-0 practical_dhawan[247752]: 167 167
Jan 21 14:14:38 compute-0 systemd[1]: libpod-b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645.scope: Deactivated successfully.
Jan 21 14:14:38 compute-0 podman[247737]: 2026-01-21 14:14:38.281166434 +0000 UTC m=+0.404532286 container died b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cab275c7dfc106251facb1a3c19fe0cc118df6fdc53726744a0f6ae12abd52b-merged.mount: Deactivated successfully.
Jan 21 14:14:38 compute-0 podman[247737]: 2026-01-21 14:14:38.337347988 +0000 UTC m=+0.460713830 container remove b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:14:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 58 KiB/s wr, 7 op/s
Jan 21 14:14:38 compute-0 systemd[1]: libpod-conmon-b605bf809c923f807bf0db0e130665e3c26c2d7f39b7e6d77a3ff608a8480645.scope: Deactivated successfully.
Jan 21 14:14:38 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "format": "json"}]: dispatch
Jan 21 14:14:38 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "98584c80-dc48-400e-a1ef-b94d26420f34", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:14:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:14:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:14:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:14:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:14:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:14:38 compute-0 podman[247776]: 2026-01-21 14:14:38.555294944 +0000 UTC m=+0.106033938 container create ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 14:14:38 compute-0 podman[247776]: 2026-01-21 14:14:38.472702153 +0000 UTC m=+0.023441137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:14:38 compute-0 systemd[1]: Started libpod-conmon-ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea.scope.
Jan 21 14:14:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:38 compute-0 podman[247776]: 2026-01-21 14:14:38.819731591 +0000 UTC m=+0.370470575 container init ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:14:38 compute-0 podman[247776]: 2026-01-21 14:14:38.827357935 +0000 UTC m=+0.378096899 container start ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 14:14:38 compute-0 podman[247776]: 2026-01-21 14:14:38.877641097 +0000 UTC m=+0.428380051 container attach ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:14:39 compute-0 pensive_swirles[247792]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:14:39 compute-0 pensive_swirles[247792]: --> All data devices are unavailable
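The `pensive_swirles` container is the short-lived `ceph-volume lvm batch` probe launched by the sudo command at 14:14:37, and "All data devices are unavailable" means none of the three LVs is eligible for a new OSD, most likely because they already carry the three OSDs the osdmap reports up and in. Under that assumption, a harmless read-only check via cephadm's ceph-volume wrapper (fsid copied from the log) confirms it:

    # show which OSDs already occupy those LVs
    sudo cephadm ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json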
Jan 21 14:14:39 compute-0 systemd[1]: libpod-ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea.scope: Deactivated successfully.
Jan 21 14:14:39 compute-0 podman[247776]: 2026-01-21 14:14:39.324990305 +0000 UTC m=+0.875729259 container died ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve49", "format": "json"}]: dispatch
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9748c90b25aabe1f60021c88b048c1c3fb91c96fbca95b172cefb8603c4e4c61-merged.mount: Deactivated successfully.
Jan 21 14:14:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Jan 21 14:14:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 21 14:14:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0)
Jan 21 14:14:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Jan 21 14:14:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Jan 21 14:14:39 compute-0 podman[247776]: 2026-01-21 14:14:39.481169881 +0000 UTC m=+1.031908825 container remove ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swirles, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:39 compute-0 systemd[1]: libpod-conmon-ccda5bb90d134b7d5ed6ff9d5796a6b1726275d6712269bf16776b526626f1ea.scope: Deactivated successfully.
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve49", "format": "json"}]: dispatch
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007
Jan 21 14:14:39 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221/e7478f03-d463-4063-bb17-146a7fb16007],prefix=session evict} (starting...)
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:39 compute-0 sudo[247699]: pam_unix(sudo:session): session closed for user root
Jan 21 14:14:39 compute-0 ceph-mon[75031]: pgmap v1033: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 58 KiB/s wr, 7 op/s
Jan 21 14:14:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Jan 21 14:14:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Jan 21 14:14:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Jan 21 14:14:39 compute-0 sudo[247826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:14:39 compute-0 sudo[247826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:14:39 compute-0 sudo[247826]: pam_unix(sudo:session): session closed for user root
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:14:39
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'vms', 'default.rgw.meta', 'volumes']
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
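The balancer lines above are one scheduled optimization pass: mode upmap with a misplaced-PG budget of 5%, scanning the eleven listed pools and preparing 0 of a possible 10 upmap changes, i.e. placement is already even. Two read-only checks that correspond to those values (the option name is assumed from the logged 0.050000 figure):

    # active mode, last optimization, and whether the cluster is considered balanced
    ceph balancer status
    # the max-misplaced budget logged as 0.050000 (option name assumed)
    ceph config get mgr target_max_misplaced_ratio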
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "format": "json"}]: dispatch
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:14:39 compute-0 sudo[247851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0a87ab89-25c8-43d7-9b97-672b44e8c221' of type subvolume
Jan 21 14:14:39 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:14:39.646+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0a87ab89-25c8-43d7-9b97-672b44e8c221' of type subvolume
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:39 compute-0 sudo[247851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0a87ab89-25c8-43d7-9b97-672b44e8c221'' moved to trashcan
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:14:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0a87ab89-25c8-43d7-9b97-672b44e8c221, vol_name:cephfs) < ""
Jan 21 14:14:39 compute-0 podman[247888]: 2026-01-21 14:14:39.964160287 +0000 UTC m=+0.058680506 container create 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:14:40 compute-0 systemd[1]: Started libpod-conmon-0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b.scope.
Jan 21 14:14:40 compute-0 podman[247888]: 2026-01-21 14:14:39.936818039 +0000 UTC m=+0.031338278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:14:40 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:14:40 compute-0 podman[247888]: 2026-01-21 14:14:40.279699167 +0000 UTC m=+0.374219436 container init 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 14:14:40 compute-0 podman[247888]: 2026-01-21 14:14:40.289574855 +0000 UTC m=+0.384095104 container start 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 21 14:14:40 compute-0 elastic_taussig[247904]: 167 167
Jan 21 14:14:40 compute-0 systemd[1]: libpod-0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b.scope: Deactivated successfully.
Jan 21 14:14:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 78 KiB/s wr, 10 op/s
Jan 21 14:14:40 compute-0 podman[247888]: 2026-01-21 14:14:40.451321425 +0000 UTC m=+0.545841824 container attach 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:14:40 compute-0 podman[247888]: 2026-01-21 14:14:40.452094324 +0000 UTC m=+0.546614583 container died 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50a494490>)]
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc5286a2fa0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50a49bd60>)]
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:14:41 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve49", "format": "json"}]: dispatch
Jan 21 14:14:41 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "auth_id": "eve49", "format": "json"}]: dispatch
Jan 21 14:14:41 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "format": "json"}]: dispatch
Jan 21 14:14:41 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0a87ab89-25c8-43d7-9b97-672b44e8c221", "force": true, "format": "json"}]: dispatch
Jan 21 14:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-05f1c6d85f41f68562cb02a7807f33b484b403eb81222e5d3ec7d9df89fc2cb5-merged.mount: Deactivated successfully.
Jan 21 14:14:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:14:41 compute-0 podman[247888]: 2026-01-21 14:14:41.190787546 +0000 UTC m=+1.285307765 container remove 0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_taussig, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:14:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:14:41 compute-0 systemd[1]: libpod-conmon-0e201b775104f5b9434177bae592cbda6cdd96c408413ac7ffba46d7014ff75b.scope: Deactivated successfully.
Jan 21 14:14:41 compute-0 podman[247928]: 2026-01-21 14:14:41.332209527 +0000 UTC m=+0.024248646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:14:41 compute-0 podman[247928]: 2026-01-21 14:14:41.436248706 +0000 UTC m=+0.128287805 container create ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 14:14:41 compute-0 systemd[1]: Started libpod-conmon-ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e.scope.
Jan 21 14:14:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f78be081f7f1a28cef63708907cd63c42af49bc25a73b8a289e0a19e3e834b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f78be081f7f1a28cef63708907cd63c42af49bc25a73b8a289e0a19e3e834b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f78be081f7f1a28cef63708907cd63c42af49bc25a73b8a289e0a19e3e834b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f78be081f7f1a28cef63708907cd63c42af49bc25a73b8a289e0a19e3e834b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:41 compute-0 podman[247928]: 2026-01-21 14:14:41.520640711 +0000 UTC m=+0.212679830 container init ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 14:14:41 compute-0 podman[247928]: 2026-01-21 14:14:41.527076997 +0000 UTC m=+0.219116096 container start ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 14:14:41 compute-0 podman[247928]: 2026-01-21 14:14:41.534512256 +0000 UTC m=+0.226551375 container attach ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]: {
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:     "0": [
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:         {
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "devices": [
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "/dev/loop3"
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             ],
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_name": "ceph_lv0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_size": "21470642176",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "name": "ceph_lv0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "tags": {
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.cluster_name": "ceph",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.crush_device_class": "",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.encrypted": "0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.objectstore": "bluestore",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.osd_id": "0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.type": "block",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.vdo": "0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.with_tpm": "0"
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             },
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "type": "block",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "vg_name": "ceph_vg0"
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:         }
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:     ],
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:     "1": [
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:         {
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "devices": [
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "/dev/loop4"
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             ],
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_name": "ceph_lv1",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_size": "21470642176",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "name": "ceph_lv1",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "tags": {
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.cluster_name": "ceph",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.crush_device_class": "",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.encrypted": "0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.objectstore": "bluestore",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.osd_id": "1",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.type": "block",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.vdo": "0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.with_tpm": "0"
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             },
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "type": "block",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "vg_name": "ceph_vg1"
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:         }
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:     ],
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:     "2": [
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:         {
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "devices": [
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "/dev/loop5"
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             ],
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_name": "ceph_lv2",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_size": "21470642176",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "name": "ceph_lv2",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "tags": {
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.cluster_name": "ceph",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.crush_device_class": "",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.encrypted": "0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.objectstore": "bluestore",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.osd_id": "2",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.type": "block",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.vdo": "0",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:                 "ceph.with_tpm": "0"
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             },
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "type": "block",
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:             "vg_name": "ceph_vg2"
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:         }
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]:     ]
Jan 21 14:14:41 compute-0 recursing_dubinsky[247943]: }
Jan 21 14:14:41 compute-0 systemd[1]: libpod-ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e.scope: Deactivated successfully.
Jan 21 14:14:41 compute-0 podman[247928]: 2026-01-21 14:14:41.835162046 +0000 UTC m=+0.527201155 container died ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 14:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f78be081f7f1a28cef63708907cd63c42af49bc25a73b8a289e0a19e3e834b6-merged.mount: Deactivated successfully.
Jan 21 14:14:41 compute-0 podman[247928]: 2026-01-21 14:14:41.875346124 +0000 UTC m=+0.567385223 container remove ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_dubinsky, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:14:41 compute-0 systemd[1]: libpod-conmon-ee506f4537fb1a8dcb3e8df4c0eb4ce45274d0aa30ab28c89b7563563cf6592e.scope: Deactivated successfully.
Jan 21 14:14:41 compute-0 sudo[247851]: pam_unix(sudo:session): session closed for user root
Jan 21 14:14:41 compute-0 sudo[247964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:14:41 compute-0 sudo[247964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:14:41 compute-0 sudo[247964]: pam_unix(sudo:session): session closed for user root
Jan 21 14:14:42 compute-0 sudo[247989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:14:42 compute-0 sudo[247989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:14:42 compute-0 ceph-mon[75031]: pgmap v1034: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 78 KiB/s wr, 10 op/s
Jan 21 14:14:42 compute-0 podman[248028]: 2026-01-21 14:14:42.31597796 +0000 UTC m=+0.020869195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:14:42 compute-0 podman[248028]: 2026-01-21 14:14:42.332472278 +0000 UTC m=+0.037363483 container create 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 14:14:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 458 B/s rd, 69 KiB/s wr, 9 op/s
Jan 21 14:14:42 compute-0 systemd[1]: Started libpod-conmon-7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41.scope.
Jan 21 14:14:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:14:42 compute-0 podman[248028]: 2026-01-21 14:14:42.389711828 +0000 UTC m=+0.094603083 container init 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 14:14:42 compute-0 podman[248028]: 2026-01-21 14:14:42.395841806 +0000 UTC m=+0.100733021 container start 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 14:14:42 compute-0 podman[248028]: 2026-01-21 14:14:42.399790251 +0000 UTC m=+0.104681476 container attach 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:14:42 compute-0 charming_hodgkin[248044]: 167 167
Jan 21 14:14:42 compute-0 systemd[1]: libpod-7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41.scope: Deactivated successfully.
Jan 21 14:14:42 compute-0 podman[248028]: 2026-01-21 14:14:42.401167754 +0000 UTC m=+0.106058969 container died 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-85c1ee6c78c373f9ec9f816f4d7d68907c17ce2c3f2d35d5797481fd67501d8d-merged.mount: Deactivated successfully.
Jan 21 14:14:42 compute-0 podman[248028]: 2026-01-21 14:14:42.439742884 +0000 UTC m=+0.144634089 container remove 7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 21 14:14:42 compute-0 systemd[1]: libpod-conmon-7b012823b1b33d384b43a97fedfd635e7e5a96a9f02ff20b3b3e614466241f41.scope: Deactivated successfully.
Jan 21 14:14:42 compute-0 podman[248068]: 2026-01-21 14:14:42.583784617 +0000 UTC m=+0.020223978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:14:42 compute-0 podman[248068]: 2026-01-21 14:14:42.597388656 +0000 UTC m=+0.033827997 container create bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 14:14:42 compute-0 systemd[1]: Started libpod-conmon-bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a.scope.
Jan 21 14:14:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05aef2b5fdbb13146fb90b6909fb552acfc59c2658df856c2d43f51af618d2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05aef2b5fdbb13146fb90b6909fb552acfc59c2658df856c2d43f51af618d2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05aef2b5fdbb13146fb90b6909fb552acfc59c2658df856c2d43f51af618d2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05aef2b5fdbb13146fb90b6909fb552acfc59c2658df856c2d43f51af618d2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:14:42 compute-0 podman[248068]: 2026-01-21 14:14:42.695913291 +0000 UTC m=+0.132352642 container init bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:14:42 compute-0 podman[248068]: 2026-01-21 14:14:42.702187623 +0000 UTC m=+0.138626964 container start bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 14:14:42 compute-0 podman[248068]: 2026-01-21 14:14:42.705748378 +0000 UTC m=+0.142187719 container attach bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 14:14:42 compute-0 nova_compute[239261]: 2026-01-21 14:14:42.741 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:14:43 compute-0 ceph-mon[75031]: pgmap v1035: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 458 B/s rd, 69 KiB/s wr, 9 op/s
Jan 21 14:14:43 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.tnwklj(active, since 30m)
Jan 21 14:14:43 compute-0 lvm[248163]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:14:43 compute-0 lvm[248164]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:14:43 compute-0 lvm[248163]: VG ceph_vg0 finished
Jan 21 14:14:43 compute-0 lvm[248164]: VG ceph_vg1 finished
Jan 21 14:14:43 compute-0 lvm[248166]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:14:43 compute-0 lvm[248166]: VG ceph_vg2 finished
Jan 21 14:14:43 compute-0 lvm[248168]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:14:43 compute-0 lvm[248168]: VG ceph_vg2 finished
Jan 21 14:14:43 compute-0 agitated_meninsky[248085]: {}
Jan 21 14:14:43 compute-0 systemd[1]: libpod-bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a.scope: Deactivated successfully.
Jan 21 14:14:43 compute-0 podman[248068]: 2026-01-21 14:14:43.502952583 +0000 UTC m=+0.939391924 container died bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:14:43 compute-0 systemd[1]: libpod-bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a.scope: Consumed 1.253s CPU time.
Jan 21 14:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c05aef2b5fdbb13146fb90b6909fb552acfc59c2658df856c2d43f51af618d2f-merged.mount: Deactivated successfully.
Jan 21 14:14:43 compute-0 podman[248068]: 2026-01-21 14:14:43.547936137 +0000 UTC m=+0.984375478 container remove bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_meninsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:14:43 compute-0 systemd[1]: libpod-conmon-bcb8b3d6b6c5ef0c17f8c815bc4ccd32ebd2b29f5ecfefcbabea40e1cd85221a.scope: Deactivated successfully.
Jan 21 14:14:43 compute-0 sudo[247989]: pam_unix(sudo:session): session closed for user root
Jan 21 14:14:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:14:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:14:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:14:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:14:43 compute-0 sudo[248180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:14:43 compute-0 sudo[248180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:14:43 compute-0 sudo[248180]: pam_unix(sudo:session): session closed for user root
Jan 21 14:14:43 compute-0 nova_compute[239261]: 2026-01-21 14:14:43.726 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:14:43 compute-0 nova_compute[239261]: 2026-01-21 14:14:43.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:14:43 compute-0 nova_compute[239261]: 2026-01-21 14:14:43.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:14:43 compute-0 nova_compute[239261]: 2026-01-21 14:14:43.749 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:14:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 65 KiB/s wr, 8 op/s
Jan 21 14:14:44 compute-0 nova_compute[239261]: 2026-01-21 14:14:44.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:14:44 compute-0 nova_compute[239261]: 2026-01-21 14:14:44.767 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:14:44 compute-0 nova_compute[239261]: 2026-01-21 14:14:44.768 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:14:44 compute-0 nova_compute[239261]: 2026-01-21 14:14:44.768 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:14:44 compute-0 nova_compute[239261]: 2026-01-21 14:14:44.768 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:14:44 compute-0 nova_compute[239261]: 2026-01-21 14:14:44.769 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:14:45 compute-0 ceph-mon[75031]: mgrmap e15: compute-0.tnwklj(active, since 30m)
Jan 21 14:14:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:14:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:14:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:14:45 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979897544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:14:45 compute-0 nova_compute[239261]: 2026-01-21 14:14:45.655 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.886s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:14:45 compute-0 nova_compute[239261]: 2026-01-21 14:14:45.795 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:14:45 compute-0 nova_compute[239261]: 2026-01-21 14:14:45.796 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5008MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:14:45 compute-0 nova_compute[239261]: 2026-01-21 14:14:45.796 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:14:45 compute-0 nova_compute[239261]: 2026-01-21 14:14:45.797 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:14:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:14:46 compute-0 ceph-mon[75031]: pgmap v1036: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 65 KiB/s wr, 8 op/s
Jan 21 14:14:46 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3979897544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:14:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 73 KiB/s wr, 10 op/s
Jan 21 14:14:46 compute-0 nova_compute[239261]: 2026-01-21 14:14:46.683 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:14:46 compute-0 nova_compute[239261]: 2026-01-21 14:14:46.684 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:14:46 compute-0 nova_compute[239261]: 2026-01-21 14:14:46.702 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:14:47 compute-0 ceph-mon[75031]: pgmap v1037: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 73 KiB/s wr, 10 op/s
Jan 21 14:14:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:14:47 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099711370' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:14:47 compute-0 nova_compute[239261]: 2026-01-21 14:14:47.313 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:14:47 compute-0 nova_compute[239261]: 2026-01-21 14:14:47.322 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:14:47 compute-0 nova_compute[239261]: 2026-01-21 14:14:47.386 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:14:47 compute-0 nova_compute[239261]: 2026-01-21 14:14:47.387 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:14:47 compute-0 nova_compute[239261]: 2026-01-21 14:14:47.388 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:14:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 50 KiB/s wr, 7 op/s
Jan 21 14:14:48 compute-0 nova_compute[239261]: 2026-01-21 14:14:48.389 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:14:48 compute-0 nova_compute[239261]: 2026-01-21 14:14:48.390 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:14:48 compute-0 nova_compute[239261]: 2026-01-21 14:14:48.391 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:14:48 compute-0 nova_compute[239261]: 2026-01-21 14:14:48.391 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:14:48 compute-0 nova_compute[239261]: 2026-01-21 14:14:48.392 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:14:48 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1099711370' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:14:48 compute-0 nova_compute[239261]: 2026-01-21 14:14:48.726 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:14:49 compute-0 ceph-mon[75031]: pgmap v1038: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 50 KiB/s wr, 7 op/s
Jan 21 14:14:49 compute-0 nova_compute[239261]: 2026-01-21 14:14:49.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 56 KiB/s wr, 8 op/s
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666210317142837 of space, bias 1.0, pg target 0.1998630951428511 quantized to 32 (current 32)
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00010063828780500965 of space, bias 4.0, pg target 0.12076594536601158 quantized to 16 (current 16)
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:14:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
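
Annotation: every pg_autoscaler pair above follows one formula, pg_target = usage_ratio * bias * pg_budget, and the logged numbers imply pg_budget = 300 here (plausibly mon_target_pg_per_osd=100 across 3 OSDs, though only the factor 300 is actually derivable from the log). The target is then quantized to a power of two, and the pool is left at its current pg_num unless the ideal value differs from it by the autoscaler's threshold factor, which is why tiny targets still read "quantized to 32 (current 32)". Checking the arithmetic:

    # Reproduces the pg_autoscaler numbers logged above.
    def pg_target(usage_ratio, bias, pg_budget=300):
        return usage_ratio * bias * pg_budget

    print(pg_target(0.000666210317142837, 1.0))    # 0.19986... ('images')
    print(pg_target(0.00010063828780500965, 4.0))  # 0.12076... (cephfs meta)
    print(pg_target(7.185749983720779e-06, 1.0))   # 0.00215... ('.mgr')
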
Jan 21 14:14:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:14:51 compute-0 ceph-mon[75031]: pgmap v1039: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 56 KiB/s wr, 8 op/s
Jan 21 14:14:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 15 KiB/s wr, 3 op/s
Jan 21 14:14:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:14:52 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478'.
Jan 21 14:14:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/.meta.tmp'
Jan 21 14:14:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/.meta.tmp' to config b'/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/.meta'
Jan 21 14:14:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:14:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "format": "json"}]: dispatch
Jan 21 14:14:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:14:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
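
Annotation: this create/getpath pair is the standard CephFS subvolume provisioning handshake, here driven by client.openstack (most likely Manila's CephFS driver under tempest): create a size-capped, namespace-isolated subvolume, then resolve its data path for export. CLI equivalents of the two dispatched mgr commands, as a sketch:

    import subprocess

    SUB = "cb5ab99b-0e59-4153-829e-95580fc1cdff"  # sub_name from the log

    subprocess.run(
        ["ceph", "fs", "subvolume", "create", "cephfs", SUB,
         "--size", "1073741824", "--namespace-isolated", "--mode", "0755"],
        check=True,
    )
    path = subprocess.run(
        ["ceph", "fs", "subvolume", "getpath", "cephfs", SUB],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(path)  # /volumes/_nogroup/<sub_name>/<uuid>, as in the Earmark line
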
Jan 21 14:14:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:14:53 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:53 compute-0 ceph-mon[75031]: pgmap v1040: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 15 KiB/s wr, 3 op/s
Jan 21 14:14:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "format": "json"}]: dispatch
Jan 21 14:14:53 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:14:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 15 KiB/s wr, 3 op/s
Jan 21 14:14:55 compute-0 ceph-mon[75031]: pgmap v1041: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 15 KiB/s wr, 3 op/s
Jan 21 14:14:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
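
Annotation: the recurring _set_new_cache_sizes line is the mon's memory autotuner re-splitting its cache budget; the three allocations sit just under cache_size:

    # Values in bytes, copied from the log line above.
    cache_size = 1020054731
    inc_alloc = full_alloc = 348127232   # incremental / full osdmap caches
    kv_alloc = 318767104                 # rocksdb (kv) cache
    total = inc_alloc + full_alloc + kv_alloc
    assert total <= cache_size
    print(total / cache_size)  # ~0.995 of the budget consumed
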
Jan 21 14:14:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 21 KiB/s wr, 3 op/s
Jan 21 14:14:57 compute-0 ceph-mon[75031]: pgmap v1042: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 21 KiB/s wr, 3 op/s
Jan 21 14:14:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 1 op/s
Jan 21 14:14:59 compute-0 ceph-mon[75031]: pgmap v1043: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 1 op/s
Jan 21 14:14:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:14:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 14:14:59 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591'.
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/.meta.tmp'
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/.meta.tmp' to config b'/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/.meta'
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "format": "json"}]: dispatch
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 14:15:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:15:00 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/74d6c6f5-e0f2-4207-b59c-99c525e6f1c7/a3cff92f-535d-4f55-9c8d-18e6e534e3fb'.
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/74d6c6f5-e0f2-4207-b59c-99c525e6f1c7/.meta.tmp'
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/74d6c6f5-e0f2-4207-b59c-99c525e6f1c7/.meta.tmp' to config b'/volumes/_nogroup/74d6c6f5-e0f2-4207-b59c-99c525e6f1c7/.meta'
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "format": "json"}]: dispatch
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
Jan 21 14:15:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:15:00 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 1 op/s
Jan 21 14:15:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:00 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:00 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "format": "json"}]: dispatch
Jan 21 14:15:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "format": "json"}]: dispatch
Jan 21 14:15:01 compute-0 ceph-mon[75031]: pgmap v1044: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 1 op/s
Jan 21 14:15:01 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:01 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 14:15:01 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5'.
Jan 21 14:15:01 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/.meta.tmp'
Jan 21 14:15:01 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/.meta.tmp' to config b'/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/.meta'
Jan 21 14:15:01 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 14:15:01 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "format": "json"}]: dispatch
Jan 21 14:15:01 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 14:15:01 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 14:15:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:15:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s wr, 1 op/s
Jan 21 14:15:02 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:02 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "format": "json"}]: dispatch
Jan 21 14:15:02 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:03 compute-0 ceph-mon[75031]: pgmap v1045: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s wr, 1 op/s
Jan 21 14:15:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:15:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:03 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 14:15:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0b71204a-52c0-4e93-8d46-339e009b5492", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:15:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0b71204a-52c0-4e93-8d46-339e009b5492", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0b71204a-52c0-4e93-8d46-339e009b5492", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:15:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Jan 21 14:15:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0b71204a-52c0-4e93-8d46-339e009b5492", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_0b71204a-52c0-4e93-8d46-339e009b5492", "mon", "allow r"], "format": "json"}]': finished
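
Annotation: the authorize flow above mints a deliberately narrow cephx identity: rw on the MDS but only under the subvolume path, rw on the OSDs but only in the cephfs.cephfs.data pool and the subvolume's private RADOS namespace (the effect of namespace_isolated at create time), and read-only on the mons. The equivalent direct call, with entity, path and namespace copied verbatim from the log:

    import subprocess

    ENTITY = "client.tempest-cephx-id-1440431664"
    PATH = ("/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492"
            "/1459d9db-30f5-43d3-af84-146db6808591")

    subprocess.run(
        ["ceph", "auth", "get-or-create", ENTITY,
         "mds", "allow rw path=" + PATH,
         "osd", "allow rw pool=cephfs.cephfs.data "
                "namespace=fsvolumens___nogroup_"
                "0b71204a-52c0-4e93-8d46-339e009b5492",
         "mon", "allow r"],
        check=True,
    )
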
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "auth_id": "tempest-cephx-id-483669843", "tenant_id": "28d69e0c83c84d03bcfbc4f9c9057023", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume authorize, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, tenant_id:28d69e0c83c84d03bcfbc4f9c9057023, vol_name:cephfs) < ""
Jan 21 14:15:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} v 0)
Jan 21 14:15:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} : dispatch
Jan 21 14:15:05 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-483669843 with tenant 28d69e0c83c84d03bcfbc4f9c9057023
Jan 21 14:15:05 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-483669843", "caps": ["mds", "allow rw path=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_a27e07da-ba92-4d67-aa02-2edb8a28bc44", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:15:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-483669843", "caps": ["mds", "allow rw path=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_a27e07da-ba92-4d67-aa02-2edb8a28bc44", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:05 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-483669843", "caps": ["mds", "allow rw path=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_a27e07da-ba92-4d67-aa02-2edb8a28bc44", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume authorize, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, tenant_id:28d69e0c83c84d03bcfbc4f9c9057023, vol_name:cephfs) < ""
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "format": "json"}]: dispatch
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '74d6c6f5-e0f2-4207-b59c-99c525e6f1c7' of type subvolume
Jan 21 14:15:05 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:05.688+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '74d6c6f5-e0f2-4207-b59c-99c525e6f1c7' of type subvolume
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path b'/volumes/_nogroup/74d6c6f5-e0f2-4207-b59c-99c525e6f1c7' moved to trashcan
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:15:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:74d6c6f5-e0f2-4207-b59c-99c525e6f1c7, vol_name:cephfs) < ""
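
Annotation: the (95) errors above are expected: 'fs clone status' is only valid for subvolumes of type clone, so a plain subvolume answers EOPNOTSUPP and the caller treats that as "not a clone" before deleting. Note also that 'fs subvolume rm' merely renames the tree into a trash directory ("moved to trashcan") and queues an async purge job, so space is reclaimed later. A sketch of the caller-side pattern:

    import subprocess

    NAME = "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7"  # sub_name from the log

    res = subprocess.run(
        ["ceph", "fs", "clone", "status", "cephfs", NAME],
        capture_output=True, text=True,
    )
    if res.returncode != 0:
        # e.g. (95) Operation not supported: a plain subvolume, not a clone
        subprocess.run(
            ["ceph", "fs", "subvolume", "rm", "cephfs", NAME, "--force"],
            check=True,
        )
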
Jan 21 14:15:05 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:05 compute-0 ceph-mon[75031]: pgmap v1046: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Jan 21 14:15:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} : dispatch
Jan 21 14:15:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-483669843", "caps": ["mds", "allow rw path=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_a27e07da-ba92-4d67-aa02-2edb8a28bc44", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:05 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-483669843", "caps": ["mds", "allow rw path=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_a27e07da-ba92-4d67-aa02-2edb8a28bc44", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s wr, 4 op/s
Jan 21 14:15:06 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "auth_id": "tempest-cephx-id-483669843", "tenant_id": "28d69e0c83c84d03bcfbc4f9c9057023", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:06 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "format": "json"}]: dispatch
Jan 21 14:15:06 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "74d6c6f5-e0f2-4207-b59c-99c525e6f1c7", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:07 compute-0 podman[248250]: 2026-01-21 14:15:07.33772124 +0000 UTC m=+0.062286384 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 21 14:15:07 compute-0 podman[248249]: 2026-01-21 14:15:07.387444428 +0000 UTC m=+0.112651967 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
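
Annotation: both podman entries are routine healthcheck runs: each container defines healthcheck.test = /openstack/healthcheck (bind-mounted read-only from /var/lib/openstack/healthchecks/<name>), and health_status=healthy with health_failing_streak=0 means the probe passed. The same state can be read back directly:

    import json
    import subprocess

    # Container name taken from the log; works for any container that
    # defines a healthcheck.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "ovn_metadata_agent"],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(out)
    print(health["Status"], health["FailingStreak"])
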
Jan 21 14:15:07 compute-0 ceph-mon[75031]: pgmap v1047: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s wr, 4 op/s
Jan 21 14:15:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s wr, 3 op/s
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "auth_id": "tempest-cephx-id-483669843", "format": "json"}]: dispatch
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume deauthorize, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mon[75031]: pgmap v1048: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s wr, 3 op/s
Jan 21 14:15:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} v 0)
Jan 21 14:15:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} : dispatch
Jan 21 14:15:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-483669843"} v 0)
Jan 21 14:15:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-483669843"} : dispatch
Jan 21 14:15:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-483669843"}]': finished
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume deauthorize, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "auth_id": "tempest-cephx-id-483669843", "format": "json"}]: dispatch
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume evict, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-483669843, client_metadata.root=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5
Jan 21 14:15:09 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-483669843,client_metadata.root=/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44/e23699aa-6ed1-45b2-bad4-4a8d71525ba5],prefix=session evict} (starting...)
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-483669843, format:json, prefix:fs subvolume evict, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 14:15:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:15:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591
Jan 21 14:15:09 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492/1459d9db-30f5-43d3-af84-146db6808591],prefix=session evict} (starting...)
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
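
Annotation: teardown above is two-phase per subvolume: deauthorize removes the tempest cephx key (the mon-side "auth rm ... finished" lines), then evict asks the MDS to drop any live sessions matching both the auth name and the client root, so an already-mounted client cannot keep using the revoked identity. How a client such as client.openstack issues those mgr commands with python-rados, as a sketch (conffile and client name are illustrative):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    for prefix in ("fs subvolume deauthorize", "fs subvolume evict"):
        cmd = {"prefix": prefix, "vol_name": "cephfs",
               "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492",
               "auth_id": "tempest-cephx-id-1440431664", "format": "json"}
        ret, out, errs = cluster.mgr_command(json.dumps(cmd), b"")
        print(prefix, "->", ret, errs)
    cluster.shutdown()
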
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "format": "json"}]: dispatch
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:09.698+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a27e07da-ba92-4d67-aa02-2edb8a28bc44' of type subvolume
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a27e07da-ba92-4d67-aa02-2edb8a28bc44' of type subvolume
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path b'/volumes/_nogroup/a27e07da-ba92-4d67-aa02-2edb8a28bc44' moved to trashcan
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a27e07da-ba92-4d67-aa02-2edb8a28bc44, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "format": "json"}]: dispatch
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0b71204a-52c0-4e93-8d46-339e009b5492, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0b71204a-52c0-4e93-8d46-339e009b5492, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:09.771+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0b71204a-52c0-4e93-8d46-339e009b5492' of type subvolume
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0b71204a-52c0-4e93-8d46-339e009b5492' of type subvolume
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path b'/volumes/_nogroup/0b71204a-52c0-4e93-8d46-339e009b5492' moved to trashcan
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:15:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0b71204a-52c0-4e93-8d46-339e009b5492, vol_name:cephfs) < ""
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "auth_id": "tempest-cephx-id-483669843", "format": "json"}]: dispatch
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-483669843", "format": "json"} : dispatch
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-483669843"} : dispatch
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-483669843"}]': finished
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "auth_id": "tempest-cephx-id-483669843", "format": "json"}]: dispatch
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "format": "json"}]: dispatch
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a27e07da-ba92-4d67-aa02-2edb8a28bc44", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "format": "json"}]: dispatch
Jan 21 14:15:10 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0b71204a-52c0-4e93-8d46-339e009b5492", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 KiB/s wr, 9 op/s
Jan 21 14:15:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:15:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:15:11 compute-0 ceph-mon[75031]: pgmap v1049: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 KiB/s wr, 9 op/s
Jan 21 14:15:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:15:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:15:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:15:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:15:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 82 KiB/s wr, 8 op/s
Jan 21 14:15:13 compute-0 ceph-mon[75031]: pgmap v1050: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 82 KiB/s wr, 8 op/s
Jan 21 14:15:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 82 KiB/s wr, 9 op/s
Jan 21 14:15:15 compute-0 ceph-mon[75031]: pgmap v1051: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 82 KiB/s wr, 9 op/s
Jan 21 14:15:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 99 KiB/s wr, 12 op/s
Jan 21 14:15:16 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:15:16.736 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:15:16 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:15:16.737 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:15:16 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:15:16.738 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
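
Annotation: this is the metadata agent's liveness acknowledgement: northd bumped SB_Global.nb_cfg from 6 to 7, and the agent writes that value into its Chassis_Private external_ids key neutron:ovn-metadata-sb-cfg so Neutron can tell the agent has processed the latest southbound state. A sketch of the same transaction with ovsdbapp, assuming an already-connected OVN-SB API object sb_api (connection setup omitted):

    # Chassis_Private record UUID copied from the DbSetCommand above.
    CHASSIS = "3ade990a-d6f9-4724-a58c-009e4fc34364"

    def ack_nb_cfg(sb_api, nb_cfg):
        # Mirrors DbSetCommand(table=Chassis_Private, col_values=(...))
        sb_api.db_set(
            "Chassis_Private", CHASSIS,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ).execute(check_error=True)
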
Jan 21 14:15:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 14:15:17 compute-0 ceph-mon[75031]: pgmap v1052: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 99 KiB/s wr, 12 op/s
Jan 21 14:15:18 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b'.
Jan 21 14:15:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/.meta.tmp'
Jan 21 14:15:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/.meta.tmp' to config b'/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/.meta'
Jan 21 14:15:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 14:15:18 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "format": "json"}]: dispatch
Jan 21 14:15:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 14:15:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
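
The audit and volumes-module lines above are one manila/Tempest share creation: a 1 GiB, namespace-isolated CephFS subvolume is created and its data path resolved. The same two mgr calls, issued by hand through the ceph CLI (volume and subvolume names copied from the log):

    import subprocess

    VOL = "cephfs"
    SUB = "c804cc91-0101-4131-a680-b760e9df84f1"

    # Matches the dispatched JSON: 1 GiB quota, isolated RADOS namespace, mode 0755.
    subprocess.run(
        ["ceph", "fs", "subvolume", "create", VOL, SUB,
         "--size", "1073741824", "--namespace-isolated", "--mode", "0755"],
        check=True)

    # Prints /volumes/_nogroup/<sub_name>/<uuid>, the path seen in the earmark line.
    path = subprocess.run(
        ["ceph", "fs", "subvolume", "getpath", VOL, SUB],
        check=True, capture_output=True, text=True).stdout.strip()
    print(path)
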
Jan 21 14:15:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:15:18 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 63 KiB/s wr, 8 op/s
Jan 21 14:15:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "format": "json"}]: dispatch
Jan 21 14:15:18 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:18 compute-0 ceph-mon[75031]: pgmap v1053: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 63 KiB/s wr, 8 op/s
Jan 21 14:15:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 70 KiB/s wr, 9 op/s
Jan 21 14:15:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:21 compute-0 ceph-mon[75031]: pgmap v1054: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 70 KiB/s wr, 9 op/s
Jan 21 14:15:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 14:15:22 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:15:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:22 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:22 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 14:15:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c804cc91-0101-4131-a680-b760e9df84f1", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:15:22 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c804cc91-0101-4131-a680-b760e9df84f1", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:22 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c804cc91-0101-4131-a680-b760e9df84f1", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
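
The authorize call expands into two mon commands: an 'auth get' to check whether client.tempest-cephx-id-1440431664 already exists, then an 'auth get-or-create' granting an mds cap pinned to the subvolume path, an osd cap pinned to the data pool plus the subvolume's RADOS namespace, and read-only mon access. Since 'auth get-or-create' is an ordinary mon command, the same JSON can be sent directly from Python with the rados binding (caps copied verbatim from the log; the conffile path and a usable local keyring are assumptions):

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = {
            "prefix": "auth get-or-create",
            "entity": "client.tempest-cephx-id-1440431664",
            "caps": [
                "mds", "allow rw path=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b",
                "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c804cc91-0101-4131-a680-b760e9df84f1",
                "mon", "allow r",
            ],
            "format": "json",
        }
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(ret, out)
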
Jan 21 14:15:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:15:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/273413856' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:15:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:15:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/273413856' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:15:23 compute-0 ceph-mon[75031]: pgmap v1055: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 14:15:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c804cc91-0101-4131-a680-b760e9df84f1", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c804cc91-0101-4131-a680-b760e9df84f1", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/273413856' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:15:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/273413856' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
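
After granting access, the client sanity-checks capacity with two more mon commands: a cluster-wide 'df' and an 'osd pool get-quota' against the volumes pool. A short sketch that issues both and pulls out the relevant numbers (same assumed connection as the previous sketch):

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        _, out, _ = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        print("avail bytes:", json.loads(out)["stats"]["total_avail_bytes"])

        _, out, _ = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b"")
        print(json.loads(out))  # quota_max_bytes / quota_max_objects
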
Jan 21 14:15:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 14:15:25 compute-0 ceph-mon[75031]: pgmap v1056: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 24 KiB/s wr, 4 op/s
Jan 21 14:15:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 14:15:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 14:15:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:15:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 35 KiB/s wr, 5 op/s
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b
Jan 21 14:15:26 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/5e654857-70e7-450c-a3f9-033cf187753b],prefix=session evict} (starting...)
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
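
Deauthorize removes the cephx key ('auth rm' above), and evict then asks the MDS over its admin socket to drop any sessions still using it; the filter pair in the asok_command line (auth_name plus client_metadata.root) scopes the eviction to clients mounted on this one subvolume. A read-only version of the same query against the daemon socket (daemon name and metadata field taken from the log; 'session ls' is used here because an actual evict is destructive):

    import json
    import subprocess

    ROOT = ("/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1/"
            "5e654857-70e7-450c-a3f9-033cf187753b")

    out = subprocess.run(
        ["ceph", "daemon", "mds.cephfs.compute-0.ddixwa", "session", "ls"],
        check=True, capture_output=True, text=True).stdout
    left = [s for s in json.loads(out)
            if s.get("client_metadata", {}).get("root") == ROOT]
    print(f"{len(left)} matching session(s) still mounted")
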
Jan 21 14:15:26 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:26 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:15:26 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c804cc91-0101-4131-a680-b760e9df84f1", "format": "json"}]: dispatch
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c804cc91-0101-4131-a680-b760e9df84f1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c804cc91-0101-4131-a680-b760e9df84f1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:26.708+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c804cc91-0101-4131-a680-b760e9df84f1' of type subvolume
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c804cc91-0101-4131-a680-b760e9df84f1' of type subvolume
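
The (95) Operation not supported reply is expected: 'fs clone status' is only valid for subvolumes that were created by cloning a snapshot, and this one is of plain type 'subvolume'. The client appears to probe clone status before deleting and to treat EOPNOTSUPP as "not a clone". Reproducing the probe by hand (ceph normally surfaces the errno as its exit status; treat that mapping as an assumption worth verifying on your build):

    import errno
    import subprocess

    r = subprocess.run(
        ["ceph", "fs", "clone", "status", "cephfs",
         "c804cc91-0101-4131-a680-b760e9df84f1"],
        capture_output=True, text=True)
    # Expect a non-zero exit (errno.EOPNOTSUPP == 95) for a non-clone subvolume.
    print(r.returncode, r.stderr.strip())
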
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c804cc91-0101-4131-a680-b760e9df84f1'' moved to trashcan
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:15:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c804cc91-0101-4131-a680-b760e9df84f1, vol_name:cephfs) < ""
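
Note that 'fs subvolume rm' does not delete data inline: the mgr renames the subvolume directory into a trash area and queues an async purge job for the volume (the two lines above), so the command returns quickly even for large subvolumes. The equivalent by-hand call, with --force matching the dispatched JSON:

    import subprocess

    subprocess.run(
        ["ceph", "fs", "subvolume", "rm", "cephfs",
         "c804cc91-0101-4131-a680-b760e9df84f1", "--force"],
        check=True)

The space itself is reclaimed afterwards by the volumes module's purge threads.
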
Jan 21 14:15:27 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:27 compute-0 ceph-mon[75031]: pgmap v1057: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 35 KiB/s wr, 5 op/s
Jan 21 14:15:27 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:27 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c804cc91-0101-4131-a680-b760e9df84f1", "format": "json"}]: dispatch
Jan 21 14:15:27 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c804cc91-0101-4131-a680-b760e9df84f1", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:15:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7005 writes, 27K keys, 7005 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7005 writes, 1473 syncs, 4.76 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1137 writes, 2477 keys, 1137 commit groups, 1.0 writes per commit group, ingest: 1.36 MB, 0.00 MB/s
                                           Interval WAL: 1137 writes, 463 syncs, 2.46 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
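
This OSD's RocksDB dump is internally consistent: 7005 cumulative WAL writes over 1473 syncs is the reported 4.76 writes per sync, and the interval figures (1137 writes, 463 syncs) give 2.46. A two-line check:

    print(round(7005 / 1473, 2))  # 4.76 cumulative writes per WAL sync
    print(round(1137 / 463, 2))   # 2.46 interval writes per WAL sync

Zero stall time in both windows means compaction never blocked writers.
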
Jan 21 14:15:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 2 op/s
Jan 21 14:15:29 compute-0 ceph-mon[75031]: pgmap v1058: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 2 op/s
Jan 21 14:15:29 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 14:15:29 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac'.
Jan 21 14:15:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/.meta.tmp'
Jan 21 14:15:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/.meta.tmp' to config b'/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/.meta'
Jan 21 14:15:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 14:15:29 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "format": "json"}]: dispatch
Jan 21 14:15:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 14:15:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 14:15:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:15:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 48 KiB/s wr, 5 op/s
Jan 21 14:15:30 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:30 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "format": "json"}]: dispatch
Jan 21 14:15:30 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:31 compute-0 ceph-mon[75031]: pgmap v1059: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 48 KiB/s wr, 5 op/s
Jan 21 14:15:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 41 KiB/s wr, 4 op/s
Jan 21 14:15:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:15:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2829 syncs, 3.68 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3207 writes, 10K keys, 3207 commit groups, 1.0 writes per commit group, ingest: 13.35 MB, 0.02 MB/s
                                           Interval WAL: 3207 writes, 1398 syncs, 2.29 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 14:15:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:15:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:33 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 14:15:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5991b6dd-3598-462c-9b52-78412a23786c", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:15:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5991b6dd-3598-462c-9b52-78412a23786c", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5991b6dd-3598-462c-9b52-78412a23786c", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:15:33 compute-0 ceph-mon[75031]: pgmap v1060: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 41 KiB/s wr, 4 op/s
Jan 21 14:15:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5991b6dd-3598-462c-9b52-78412a23786c", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5991b6dd-3598-462c-9b52-78412a23786c", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:15:33.905 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:15:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:15:33.906 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:15:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:15:33.907 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
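
The acquire/held/released trio above is oslo.concurrency's synchronized decorator guarding neutron's ProcessMonitor; both the wait and the hold are about a millisecond, so there is no contention. A minimal sketch of the same pattern (lock name taken from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Runs with the in-process lock held; the acquire/held/released
        # debug lines in the log are emitted by the decorator's wrapper.
        pass
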
Jan 21 14:15:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 42 KiB/s wr, 5 op/s
Jan 21 14:15:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:36 compute-0 ceph-mon[75031]: pgmap v1061: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 42 KiB/s wr, 5 op/s
Jan 21 14:15:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 56 KiB/s wr, 6 op/s
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 14:15:37 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:15:37 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6974 writes, 26K keys, 6974 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6974 writes, 1420 syncs, 4.91 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1254 writes, 2803 keys, 1254 commit groups, 1.0 writes per commit group, ingest: 1.29 MB, 0.00 MB/s
                                           Interval WAL: 1254 writes, 494 syncs, 2.54 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 14:15:37 compute-0 ceph-mon[75031]: pgmap v1062: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 56 KiB/s wr, 6 op/s
Jan 21 14:15:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 14:15:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:15:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac
Jan 21 14:15:37 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c/1191fcfe-399e-45fb-be1f-5e25d8e752ac],prefix=session evict} (starting...)
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575'.
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/.meta.tmp'
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/.meta.tmp' to config b'/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/.meta'
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "format": "json"}]: dispatch
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:15:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5991b6dd-3598-462c-9b52-78412a23786c", "format": "json"}]: dispatch
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5991b6dd-3598-462c-9b52-78412a23786c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5991b6dd-3598-462c-9b52-78412a23786c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5991b6dd-3598-462c-9b52-78412a23786c' of type subvolume
Jan 21 14:15:37 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:37.819+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5991b6dd-3598-462c-9b52-78412a23786c' of type subvolume
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5991b6dd-3598-462c-9b52-78412a23786c'' moved to trashcan
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:15:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5991b6dd-3598-462c-9b52-78412a23786c, vol_name:cephfs) < ""
Jan 21 14:15:38 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:15:38 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:15:38 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:38 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:38 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "format": "json"}]: dispatch
Jan 21 14:15:38 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:38 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5991b6dd-3598-462c-9b52-78412a23786c", "format": "json"}]: dispatch
Jan 21 14:15:38 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5991b6dd-3598-462c-9b52-78412a23786c", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:38 compute-0 podman[248296]: 2026-01-21 14:15:38.352842453 +0000 UTC m=+0.070940791 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 21 14:15:38 compute-0 podman[248295]: 2026-01-21 14:15:38.367504337 +0000 UTC m=+0.092001650 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller)
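
The two podman records are the periodic container health checks: each runs the mounted /openstack/healthcheck script inside ovn_metadata_agent and ovn_controller and records health_status=healthy with a zero failing streak. The same check can be triggered on demand (container names from the log):

    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        # Exit status 0 means healthy; podman also updates the stored state.
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")
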
Jan 21 14:15:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 45 KiB/s wr, 5 op/s
Jan 21 14:15:39 compute-0 ceph-mon[75031]: pgmap v1063: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 45 KiB/s wr, 5 op/s
Jan 21 14:15:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:15:39
Jan 21 14:15:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:15:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:15:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.data', 'backups', 'images']
Jan 21 14:15:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
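
This balancer pass runs in upmap mode against all eleven pools and prepares 0 of an allowed 10 upmap changes, i.e. the PG distribution is already even; the 0.050000 max-misplaced figure is the mgr default budget. Both facts can be confirmed interactively:

    import subprocess

    # Shows the active mode ("upmap") and the last optimization result.
    subprocess.run(["ceph", "balancer", "status"], check=True)

    # The 5% budget is the target_max_misplaced_ratio mgr option.
    subprocess.run(
        ["ceph", "config", "get", "mgr", "target_max_misplaced_ratio"],
        check=True)
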
Jan 21 14:15:40 compute-0 ceph-mgr[75322]: [devicehealth INFO root] Check health
Jan 21 14:15:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 74 KiB/s wr, 8 op/s
Jan 21 14:15:40 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:15:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:40 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 14:15:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:15:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:40 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:15:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:15:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:15:41 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:15:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:15:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:41 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:15:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:15:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
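
Here the rbd_support module reloads its mirror-snapshot and trash-purge schedules for each RBD pool (vms, volumes, backups, images); the empty start_after markers mean no schedule entries exist yet. Listing them by hand uses the rbd CLI (subcommand spellings as in recent rbd releases; verify against your version):

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool])
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool])
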
Jan 21 14:15:41 compute-0 ceph-mon[75031]: pgmap v1064: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 74 KiB/s wr, 8 op/s
Jan 21 14:15:41 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:15:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:41 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 4 op/s
Jan 21 14:15:42 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:42 compute-0 nova_compute[239261]: 2026-01-21 14:15:42.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:15:43 compute-0 ceph-mon[75031]: pgmap v1065: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 4 op/s
Jan 21 14:15:43 compute-0 sudo[248340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:15:43 compute-0 sudo[248340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:43 compute-0 sudo[248340]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:43 compute-0 sudo[248365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:15:43 compute-0 sudo[248365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 5 op/s
Jan 21 14:15:44 compute-0 sudo[248365]: pam_unix(sudo:session): session closed for user root
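
The two sudo sessions above (248340 running "which python3", then 248365 running the checksummed cephadm file) are one round of the cephadm mgr module's host refresh: logged in as ceph-admin, it locates a host python3 and then uses it to execute the cephadm binary it previously staged under /var/lib/ceph/<fsid>/. A hedged sketch of that pair from Python; the wrapper is illustrative, the paths and timeout are copied from the log:

    import subprocess

    CEPHADM = ("/var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/"
               "cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973"
               "bec1d21a12ed0bb26af19c8b")

    def run_cephadm(*args, timeout=895):
        # Step 1: the "/bin/which python3" probe journald records.
        python3 = subprocess.check_output(
            ["sudo", "/bin/which", "python3"], text=True).strip()
        # Step 2: run the staged cephadm with that interpreter, as root.
        return subprocess.check_output(
            ["sudo", python3, CEPHADM, "--timeout", str(timeout), *args],
            text=True)

    # run_cephadm("gather-facts") mirrors the 14:15:43 invocation;
    # "list-networks" and the ceph-volume probes follow the same shape.
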
Jan 21 14:15:44 compute-0 sudo[248421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:15:44 compute-0 sudo[248421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:44 compute-0 sudo[248421]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:44 compute-0 sudo[248446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 21 14:15:44 compute-0 sudo[248446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:44 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:15:44 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:44 compute-0 nova_compute[239261]: 2026-01-21 14:15:44.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:15:44 compute-0 nova_compute[239261]: 2026-01-21 14:15:44.757 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:15:44 compute-0 nova_compute[239261]: 2026-01-21 14:15:44.757 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:15:44 compute-0 nova_compute[239261]: 2026-01-21 14:15:44.757 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
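
The Acquiring/acquired/released triplet above is oslo.concurrency's lock instrumentation: resource-tracker entry points funnel through one process-local "compute_resources" lock, and the hold time is logged on release (0.000s here because pruning the compute-node cache is trivial). A sketch of the same pattern, assuming oslo.concurrency is available; the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Body runs under the same process-local lock the log shows
        # being acquired and released around ResourceTracker methods.
        pass
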
Jan 21 14:15:44 compute-0 nova_compute[239261]: 2026-01-21 14:15:44.757 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:15:44 compute-0 nova_compute[239261]: 2026-01-21 14:15:44.758 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
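
That subprocess is how nova sizes storage when ephemeral disks live in Ceph: it shells out to "ceph df" as client.openstack and reads capacity from the JSON report (the request the mon later audits as {"prefix": "df"}). Nova's own helper works on per-pool stats; a hedged sketch using the cluster-wide totals instead, with key names following the ceph df JSON schema:

    import json
    import subprocess

    def ceph_free_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
        raw = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        stats = json.loads(raw)["stats"]
        # Raw free capacity across all OSDs; the pgmap lines above show
        # this cluster at 60 GiB / 60 GiB avail.
        return stats["total_avail_bytes"] / 1024 ** 3
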
Jan 21 14:15:44 compute-0 sudo[248446]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:15:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:15:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:15:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:15:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 14:15:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:15:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:15:45 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
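
Deauthorize only deletes the cephx identity; an existing mount would keep its session, so the flow follows up with "fs subvolume evict", which the mgr forwards to the MDS admin socket as the filtered "session evict" seen in the asok_command line. A sketch of issuing the same eviction by hand via ceph tell, with the daemon name and both filter values copied from the log:

    import subprocess

    subprocess.run([
        "ceph", "tell", "mds.cephfs.compute-0.ddixwa",
        "session", "evict",
        # Both filters must match: cephx name and the client's mount root.
        "auth_name=alice",
        "client_metadata.root=/volumes/_nogroup/"
        "424167b3-6c3d-4062-8da1-4d053af4cf7b/"
        "04464bce-b5c2-48d9-860a-5b8b6ce45575",
    ], check=True)
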
Jan 21 14:15:45 compute-0 sudo[248511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:15:45 compute-0 sudo[248511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:45 compute-0 sudo[248511]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:15:45 compute-0 sudo[248537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- inventory --format=json-pretty --filter-for-batch
Jan 21 14:15:45 compute-0 sudo[248537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 14:15:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:15:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:15:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:15:45 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568423292' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478
Jan 21 14:15:45 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478],prefix=session evict} (starting...)
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:15:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:15:45 compute-0 nova_compute[239261]: 2026-01-21 14:15:45.309 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:15:45 compute-0 ceph-mon[75031]: pgmap v1066: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 5 op/s
Jan 21 14:15:45 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:15:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:15:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:15:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:15:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:15:45 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:15:45 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3568423292' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:15:45 compute-0 nova_compute[239261]: 2026-01-21 14:15:45.495 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:15:45 compute-0 nova_compute[239261]: 2026-01-21 14:15:45.496 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5067MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:15:45 compute-0 nova_compute[239261]: 2026-01-21 14:15:45.497 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:15:45 compute-0 nova_compute[239261]: 2026-01-21 14:15:45.497 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:15:45 compute-0 podman[248578]: 2026-01-21 14:15:45.546598214 +0000 UTC m=+0.039669347 container create 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:15:45 compute-0 nova_compute[239261]: 2026-01-21 14:15:45.553 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:15:45 compute-0 nova_compute[239261]: 2026-01-21 14:15:45.555 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:15:45 compute-0 nova_compute[239261]: 2026-01-21 14:15:45.582 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:15:45 compute-0 systemd[1]: Started libpod-conmon-720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99.scope.
Jan 21 14:15:45 compute-0 podman[248578]: 2026-01-21 14:15:45.528342044 +0000 UTC m=+0.021413197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:15:45 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:15:45 compute-0 podman[248578]: 2026-01-21 14:15:45.645064619 +0000 UTC m=+0.138135772 container init 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 14:15:45 compute-0 podman[248578]: 2026-01-21 14:15:45.65341178 +0000 UTC m=+0.146482913 container start 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:15:45 compute-0 podman[248578]: 2026-01-21 14:15:45.657012557 +0000 UTC m=+0.150083690 container attach 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:15:45 compute-0 frosty_gould[248595]: 167 167
Jan 21 14:15:45 compute-0 systemd[1]: libpod-720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99.scope: Deactivated successfully.
Jan 21 14:15:45 compute-0 podman[248578]: 2026-01-21 14:15:45.66129568 +0000 UTC m=+0.154366813 container died 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 14:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6f622c086715d30fb2fc856b687a5f5d9d692e0bb479093cd864b8007654a09-merged.mount: Deactivated successfully.
Jan 21 14:15:45 compute-0 podman[248578]: 2026-01-21 14:15:45.714896842 +0000 UTC m=+0.207967975 container remove 720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:15:45 compute-0 systemd[1]: libpod-conmon-720fff8434892e54cf0aa162747d4d596791edae3da99abfb2c81f002fbf9f99.scope: Deactivated successfully.
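
The whole frosty_gould lifecycle above, create through remove in about 0.2 s with a single line of output, is the footprint of a one-shot "podman run --rm": cephadm briefly starts the ceph image and reads back "167 167", which is most likely the ceph user's uid/gid probed from inside the image. A hedged Python equivalent, assuming stat on /var/lib/ceph is the probe (cephadm's real entrypoint may differ):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369"
             "b06009b5830fa8d86")

    # One-shot container: created, started, attached, reaped on exit,
    # the exact create/start/attach/died/remove chain journald records.
    uid, gid = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"], text=True).split()
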
Jan 21 14:15:45 compute-0 podman[248638]: 2026-01-21 14:15:45.879269367 +0000 UTC m=+0.044192357 container create 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:15:45 compute-0 systemd[1]: Started libpod-conmon-46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee.scope.
Jan 21 14:15:45 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833533c4d5ba0a2fbe5830267245ec1bcd3bda8e142249f552aeb3a9da30b59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833533c4d5ba0a2fbe5830267245ec1bcd3bda8e142249f552aeb3a9da30b59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833533c4d5ba0a2fbe5830267245ec1bcd3bda8e142249f552aeb3a9da30b59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833533c4d5ba0a2fbe5830267245ec1bcd3bda8e142249f552aeb3a9da30b59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:45 compute-0 podman[248638]: 2026-01-21 14:15:45.859418588 +0000 UTC m=+0.024341598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:15:45 compute-0 podman[248638]: 2026-01-21 14:15:45.994245769 +0000 UTC m=+0.159168799 container init 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 21 14:15:46 compute-0 podman[248638]: 2026-01-21 14:15:46.00052143 +0000 UTC m=+0.165444480 container start 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 14:15:46 compute-0 podman[248638]: 2026-01-21 14:15:46.004396524 +0000 UTC m=+0.169319554 container attach 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 14:15:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:15:46 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253504909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:15:46 compute-0 nova_compute[239261]: 2026-01-21 14:15:46.177 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:15:46 compute-0 nova_compute[239261]: 2026-01-21 14:15:46.184 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:15:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:46 compute-0 nova_compute[239261]: 2026-01-21 14:15:46.204 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
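
The dict above is the inventory nova reports to Placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio, and since nothing changed, no update is sent. Worked out for this host (values copied from the log):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1
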
Jan 21 14:15:46 compute-0 nova_compute[239261]: 2026-01-21 14:15:46.205 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:15:46 compute-0 nova_compute[239261]: 2026-01-21 14:15:46.206 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:15:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 63 KiB/s wr, 8 op/s
Jan 21 14:15:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:15:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:46 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3253504909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:15:46 compute-0 loving_curie[248655]: [
Jan 21 14:15:46 compute-0 loving_curie[248655]:     {
Jan 21 14:15:46 compute-0 loving_curie[248655]:         "available": false,
Jan 21 14:15:46 compute-0 loving_curie[248655]:         "being_replaced": false,
Jan 21 14:15:46 compute-0 loving_curie[248655]:         "ceph_device_lvm": false,
Jan 21 14:15:46 compute-0 loving_curie[248655]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 21 14:15:46 compute-0 loving_curie[248655]:         "lsm_data": {},
Jan 21 14:15:46 compute-0 loving_curie[248655]:         "lvs": [],
Jan 21 14:15:46 compute-0 loving_curie[248655]:         "path": "/dev/sr0",
Jan 21 14:15:46 compute-0 loving_curie[248655]:         "rejected_reasons": [
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "Insufficient space (<5GB)",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "Has a FileSystem"
Jan 21 14:15:46 compute-0 loving_curie[248655]:         ],
Jan 21 14:15:46 compute-0 loving_curie[248655]:         "sys_api": {
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "actuators": null,
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "device_nodes": [
Jan 21 14:15:46 compute-0 loving_curie[248655]:                 "sr0"
Jan 21 14:15:46 compute-0 loving_curie[248655]:             ],
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "devname": "sr0",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "human_readable_size": "482.00 KB",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "id_bus": "ata",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "model": "QEMU DVD-ROM",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "nr_requests": "2",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "parent": "/dev/sr0",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "partitions": {},
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "path": "/dev/sr0",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "removable": "1",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "rev": "2.5+",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "ro": "0",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "rotational": "1",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "sas_address": "",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "sas_device_handle": "",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "scheduler_mode": "mq-deadline",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "sectors": 0,
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "sectorsize": "2048",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "size": 493568.0,
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "support_discard": "2048",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "type": "disk",
Jan 21 14:15:46 compute-0 loving_curie[248655]:             "vendor": "QEMU"
Jan 21 14:15:46 compute-0 loving_curie[248655]:         }
Jan 21 14:15:46 compute-0 loving_curie[248655]:     }
Jan 21 14:15:46 compute-0 loving_curie[248655]: ]
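
The JSON array the loving_curie container just printed is "ceph-volume inventory --format=json-pretty --filter-for-batch" output: the only raw device on this host is the QEMU DVD-ROM, rejected as an OSD candidate for two reasons. A sketch of filtering such a report for usable disks; the helper is illustrative:

    import json

    def usable_devices(report_text):
        # report_text is the JSON array printed by the container above.
        report = json.loads(report_text)
        ok = [d["path"] for d in report if d.get("available")]
        rejected = {d["path"]: d.get("rejected_reasons", [])
                    for d in report if not d.get("available")}
        return ok, rejected

    # For this host: ([], {'/dev/sr0': ['Insufficient space (<5GB)',
    #                                   'Has a FileSystem']})
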
Jan 21 14:15:46 compute-0 systemd[1]: libpod-46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee.scope: Deactivated successfully.
Jan 21 14:15:46 compute-0 podman[248638]: 2026-01-21 14:15:46.592509045 +0000 UTC m=+0.757432035 container died 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 21 14:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1833533c4d5ba0a2fbe5830267245ec1bcd3bda8e142249f552aeb3a9da30b59-merged.mount: Deactivated successfully.
Jan 21 14:15:46 compute-0 podman[248638]: 2026-01-21 14:15:46.635738938 +0000 UTC m=+0.800661928 container remove 46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 14:15:46 compute-0 systemd[1]: libpod-conmon-46684d20588a4f816212b0e321b2769b5c374fad377e309e872e95022b3327ee.scope: Deactivated successfully.
Jan 21 14:15:46 compute-0 sudo[248537]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:15:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:15:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:15:46 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:15:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:15:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:15:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:15:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:15:46 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:15:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:15:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:15:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:15:46 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
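
"config generate-minimal-conf", dispatched twice above, is the mon command cephadm uses to rebuild the stripped ceph.conf it distributes to managed hosts, and the neighboring "auth get" calls fetch the client.admin and client.bootstrap-osd keyrings shipped alongside it. The same minimal conf can be pulled by hand (a sketch, assuming admin credentials on the node):

    import subprocess

    # Typically just a [global] section with fsid and mon_host: enough
    # for a client to find the monitors.
    minimal_conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True)
    print(minimal_conf)
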
Jan 21 14:15:46 compute-0 sudo[249521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:15:46 compute-0 sudo[249521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:46 compute-0 sudo[249521]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:46 compute-0 sudo[249546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:15:46 compute-0 sudo[249546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
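
This sudo entry is cephadm acting on an OSD service spec (note CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group in the environment): inside the ceph container it runs "ceph-volume lvm batch" across three pre-created logical volumes, with --no-systemd because cephadm manages the units itself. A sketch of the inner ceph-volume call with the arguments copied verbatim from the log (cephadm's container wrapping and --config-json plumbing omitted):

    import subprocess

    subprocess.run([
        "ceph-volume", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--objectstore", "bluestore", "--yes", "--no-systemd",
    ], check=True)
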
Jan 21 14:15:47 compute-0 nova_compute[239261]: 2026-01-21 14:15:47.208 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:15:47 compute-0 nova_compute[239261]: 2026-01-21 14:15:47.208 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:15:47 compute-0 nova_compute[239261]: 2026-01-21 14:15:47.208 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:15:47 compute-0 podman[249583]: 2026-01-21 14:15:47.117405243 +0000 UTC m=+0.024130963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:15:47 compute-0 nova_compute[239261]: 2026-01-21 14:15:47.227 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:15:47 compute-0 nova_compute[239261]: 2026-01-21 14:15:47.229 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:15:47 compute-0 nova_compute[239261]: 2026-01-21 14:15:47.229 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:15:47 compute-0 podman[249583]: 2026-01-21 14:15:47.266483178 +0000 UTC m=+0.173208908 container create d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 14:15:47 compute-0 systemd[1]: Started libpod-conmon-d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882.scope.
Jan 21 14:15:47 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:15:47 compute-0 podman[249583]: 2026-01-21 14:15:47.413131524 +0000 UTC m=+0.319857244 container init d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 21 14:15:47 compute-0 podman[249583]: 2026-01-21 14:15:47.421438944 +0000 UTC m=+0.328164634 container start d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:15:47 compute-0 podman[249583]: 2026-01-21 14:15:47.425179025 +0000 UTC m=+0.331904735 container attach d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:15:47 compute-0 systemd[1]: libpod-d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882.scope: Deactivated successfully.
Jan 21 14:15:47 compute-0 condescending_boyd[249599]: 167 167
Jan 21 14:15:47 compute-0 conmon[249599]: conmon d1b687ed415b3d18b261 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882.scope/container/memory.events
Jan 21 14:15:47 compute-0 podman[249583]: 2026-01-21 14:15:47.429044258 +0000 UTC m=+0.335769968 container died d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a76bf51af1f2e75f4d3d2f88e393408b344c78a11294a9f73ecaaf47b6798d6-merged.mount: Deactivated successfully.
Jan 21 14:15:47 compute-0 podman[249583]: 2026-01-21 14:15:47.47266795 +0000 UTC m=+0.379393690 container remove d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:15:47 compute-0 systemd[1]: libpod-conmon-d1b687ed415b3d18b261c1d2a8fd9e1ca41448e00a658020e85b129286712882.scope: Deactivated successfully.
Jan 21 14:15:47 compute-0 podman[249623]: 2026-01-21 14:15:47.682757666 +0000 UTC m=+0.043578562 container create 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 21 14:15:47 compute-0 ceph-mon[75031]: pgmap v1067: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 63 KiB/s wr, 8 op/s
Jan 21 14:15:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:15:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:15:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:15:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:15:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:15:47 compute-0 nova_compute[239261]: 2026-01-21 14:15:47.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:15:47 compute-0 nova_compute[239261]: 2026-01-21 14:15:47.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:15:47 compute-0 systemd[1]: Started libpod-conmon-219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9.scope.
Jan 21 14:15:47 compute-0 nova_compute[239261]: 2026-01-21 14:15:47.746 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:15:47 compute-0 nova_compute[239261]: 2026-01-21 14:15:47.746 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:15:47 compute-0 podman[249623]: 2026-01-21 14:15:47.661706638 +0000 UTC m=+0.022527554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:15:47 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:47 compute-0 podman[249623]: 2026-01-21 14:15:47.781616119 +0000 UTC m=+0.142437035 container init 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 14:15:47 compute-0 podman[249623]: 2026-01-21 14:15:47.79030786 +0000 UTC m=+0.151128756 container start 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 21 14:15:47 compute-0 podman[249623]: 2026-01-21 14:15:47.79366952 +0000 UTC m=+0.154490436 container attach 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:15:48 compute-0 brave_hopper[249639]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:15:48 compute-0 brave_hopper[249639]: --> All data devices are unavailable
Jan 21 14:15:48 compute-0 systemd[1]: libpod-219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9.scope: Deactivated successfully.
Jan 21 14:15:48 compute-0 conmon[249639]: conmon 219de9d00ffeefe621e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9.scope/container/memory.events
Jan 21 14:15:48 compute-0 podman[249623]: 2026-01-21 14:15:48.263955591 +0000 UTC m=+0.624776497 container died 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 14:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b5803f9c838c1255dc8ee07aaff778787377a7b4da61ebc8bf02e4842ef23b3-merged.mount: Deactivated successfully.
Jan 21 14:15:48 compute-0 podman[249623]: 2026-01-21 14:15:48.311008516 +0000 UTC m=+0.671829422 container remove 219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 21 14:15:48 compute-0 systemd[1]: libpod-conmon-219de9d00ffeefe621e9fa927fe50920d099ec44b97d40d0a7805eb82a52bcb9.scope: Deactivated successfully.
Jan 21 14:15:48 compute-0 sudo[249546]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 49 KiB/s wr, 6 op/s
Jan 21 14:15:48 compute-0 sudo[249670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:15:48 compute-0 sudo[249670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:48 compute-0 sudo[249670]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:48 compute-0 sudo[249695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:15:48 compute-0 sudo[249695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:48 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:15:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:15:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:15:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:15:48 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:15:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:15:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:15:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:15:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:48 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:48 compute-0 nova_compute[239261]: 2026-01-21 14:15:48.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:15:48 compute-0 podman[249733]: 2026-01-21 14:15:48.797368003 +0000 UTC m=+0.048698625 container create 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 21 14:15:48 compute-0 systemd[1]: Started libpod-conmon-06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6.scope.
Jan 21 14:15:48 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:15:48 compute-0 podman[249733]: 2026-01-21 14:15:48.775647979 +0000 UTC m=+0.026978581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:15:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:48 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 14:15:48 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:15:48 compute-0 podman[249733]: 2026-01-21 14:15:48.898697407 +0000 UTC m=+0.150027989 container init 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:15:48 compute-0 podman[249733]: 2026-01-21 14:15:48.907409477 +0000 UTC m=+0.158740069 container start 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 14:15:48 compute-0 podman[249733]: 2026-01-21 14:15:48.912810047 +0000 UTC m=+0.164140629 container attach 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:15:48 compute-0 optimistic_wozniak[249749]: 167 167
Jan 21 14:15:48 compute-0 systemd[1]: libpod-06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6.scope: Deactivated successfully.
Jan 21 14:15:48 compute-0 podman[249733]: 2026-01-21 14:15:48.91500155 +0000 UTC m=+0.166332142 container died 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:15:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:15:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c7aa70af6f5ad17b6eac1be8856f5dcea0ba32467ef0b48208228a7c379275e-merged.mount: Deactivated successfully.
Jan 21 14:15:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:15:48 compute-0 podman[249733]: 2026-01-21 14:15:48.962156077 +0000 UTC m=+0.213486659 container remove 06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wozniak, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:15:48 compute-0 systemd[1]: libpod-conmon-06eadfc3e59136bde115232b94ac6434e509e3a9c3137ca0e30cc4066194dda6.scope: Deactivated successfully.
Jan 21 14:15:49 compute-0 podman[249774]: 2026-01-21 14:15:49.182935501 +0000 UTC m=+0.078463543 container create 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:15:49 compute-0 systemd[1]: Started libpod-conmon-017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5.scope.
Jan 21 14:15:49 compute-0 podman[249774]: 2026-01-21 14:15:49.151419181 +0000 UTC m=+0.046947313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:15:49 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe78d61e6c2b68927ac2a9a9bff7000262d4d1c624ff4540f0125406f17dc876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe78d61e6c2b68927ac2a9a9bff7000262d4d1c624ff4540f0125406f17dc876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe78d61e6c2b68927ac2a9a9bff7000262d4d1c624ff4540f0125406f17dc876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe78d61e6c2b68927ac2a9a9bff7000262d4d1c624ff4540f0125406f17dc876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:49 compute-0 podman[249774]: 2026-01-21 14:15:49.272828709 +0000 UTC m=+0.168356761 container init 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:15:49 compute-0 podman[249774]: 2026-01-21 14:15:49.281839076 +0000 UTC m=+0.177367108 container start 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:15:49 compute-0 podman[249774]: 2026-01-21 14:15:49.285533945 +0000 UTC m=+0.181061997 container attach 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 21 14:15:49 compute-0 jolly_khorana[249790]: {
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:     "0": [
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:         {
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "devices": [
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "/dev/loop3"
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             ],
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_name": "ceph_lv0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_size": "21470642176",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "name": "ceph_lv0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "tags": {
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.cluster_name": "ceph",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.crush_device_class": "",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.encrypted": "0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.objectstore": "bluestore",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.osd_id": "0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.type": "block",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.vdo": "0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.with_tpm": "0"
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             },
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "type": "block",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "vg_name": "ceph_vg0"
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:         }
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:     ],
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:     "1": [
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:         {
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "devices": [
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "/dev/loop4"
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             ],
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_name": "ceph_lv1",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_size": "21470642176",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "name": "ceph_lv1",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "tags": {
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.cluster_name": "ceph",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.crush_device_class": "",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.encrypted": "0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.objectstore": "bluestore",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.osd_id": "1",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.type": "block",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.vdo": "0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.with_tpm": "0"
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             },
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "type": "block",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "vg_name": "ceph_vg1"
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:         }
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:     ],
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:     "2": [
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:         {
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "devices": [
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "/dev/loop5"
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             ],
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_name": "ceph_lv2",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_size": "21470642176",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "name": "ceph_lv2",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "tags": {
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.cluster_name": "ceph",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.crush_device_class": "",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.encrypted": "0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.objectstore": "bluestore",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.osd_id": "2",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.type": "block",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.vdo": "0",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:                 "ceph.with_tpm": "0"
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             },
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "type": "block",
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:             "vg_name": "ceph_vg2"
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:         }
Jan 21 14:15:49 compute-0 jolly_khorana[249790]:     ]
Jan 21 14:15:49 compute-0 jolly_khorana[249790]: }
Jan 21 14:15:49 compute-0 systemd[1]: libpod-017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5.scope: Deactivated successfully.
Jan 21 14:15:49 compute-0 podman[249774]: 2026-01-21 14:15:49.600308405 +0000 UTC m=+0.495836467 container died 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 14:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe78d61e6c2b68927ac2a9a9bff7000262d4d1c624ff4540f0125406f17dc876-merged.mount: Deactivated successfully.
Jan 21 14:15:49 compute-0 podman[249774]: 2026-01-21 14:15:49.656036639 +0000 UTC m=+0.551564671 container remove 017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:15:49 compute-0 systemd[1]: libpod-conmon-017d704e3dbf3d505faf72a07056d3156788f1e3906c65b08d192ea3531849b5.scope: Deactivated successfully.
Jan 21 14:15:49 compute-0 sudo[249695]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:49 compute-0 ceph-mon[75031]: pgmap v1068: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 49 KiB/s wr, 6 op/s
Jan 21 14:15:49 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:15:49 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:49 compute-0 sudo[249811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:15:49 compute-0 sudo[249811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:49 compute-0 sudo[249811]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:49 compute-0 sudo[249836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:15:49 compute-0 sudo[249836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "49d5247d-28e2-437f-92c4-34b98896805f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/49d5247d-28e2-437f-92c4-34b98896805f/742bffda-1701-416d-826e-80b5efe59ac3'.
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/49d5247d-28e2-437f-92c4-34b98896805f/.meta.tmp'
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/49d5247d-28e2-437f-92c4-34b98896805f/.meta.tmp' to config b'/volumes/_nogroup/49d5247d-28e2-437f-92c4-34b98896805f/.meta'
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "49d5247d-28e2-437f-92c4-34b98896805f", "format": "json"}]: dispatch
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 14:15:50 compute-0 podman[249873]: 2026-01-21 14:15:50.217369585 +0000 UTC m=+0.068735488 container create 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 14:15:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:15:50 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:50 compute-0 systemd[1]: Started libpod-conmon-3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368.scope.
Jan 21 14:15:50 compute-0 podman[249873]: 2026-01-21 14:15:50.182956736 +0000 UTC m=+0.034322729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:15:50 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:15:50 compute-0 podman[249873]: 2026-01-21 14:15:50.290457488 +0000 UTC m=+0.141823411 container init 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 14:15:50 compute-0 podman[249873]: 2026-01-21 14:15:50.300124291 +0000 UTC m=+0.151490194 container start 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:15:50 compute-0 podman[249873]: 2026-01-21 14:15:50.303772609 +0000 UTC m=+0.155138502 container attach 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 14:15:50 compute-0 crazy_swanson[249889]: 167 167
Jan 21 14:15:50 compute-0 systemd[1]: libpod-3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368.scope: Deactivated successfully.
Jan 21 14:15:50 compute-0 podman[249873]: 2026-01-21 14:15:50.307068328 +0000 UTC m=+0.158434231 container died 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-64e739a2f13a030e0e3313ee8871678e23b0dfa9d3e1d228c06602f171c2f725-merged.mount: Deactivated successfully.
Jan 21 14:15:50 compute-0 podman[249873]: 2026-01-21 14:15:50.345012633 +0000 UTC m=+0.196378536 container remove 3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 21 14:15:50 compute-0 systemd[1]: libpod-conmon-3f36a118508cf487e705a8eb7ebe38959ecc6048c7e57be414fe16add179a368.scope: Deactivated successfully.
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 110 KiB/s wr, 14 op/s
Jan 21 14:15:50 compute-0 podman[249913]: 2026-01-21 14:15:50.539262217 +0000 UTC m=+0.040310353 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666210317142837 of space, bias 1.0, pg target 0.1998630951428511 quantized to 32 (current 32)
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00015084340003052672 of space, bias 4.0, pg target 0.18101208003663208 quantized to 16 (current 16)
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:15:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
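
The pg_autoscaler arithmetic above can be checked by hand. The trailing figure in every effective_target_ratio line, 64411926528 bytes, is the root's raw capacity (about 60 GiB, matching the pgmap lines); each pool's "using X of space" is its share of that capacity, and the raw pg target is share x bias x the root's PG budget. The budget that reproduces these numbers exactly is 300, i.e. the default mon_target_pg_per_osd of 100 across three OSDs; that 100 x 3 split is inferred from the printed values, not read from this host's configuration. A minimal check in Python:

    # Re-derive the pg_autoscaler "pg target" values printed above.
    # Assumption: PG budget = mon_target_pg_per_osd (default 100) * 3 OSDs.
    PG_BUDGET = 100 * 3

    pools = [
        # (pool name, "using X of space" from the log, bias from the log)
        (".mgr",               7.185749983720779e-06,  1.0),
        ("images",             0.000666210317142837,   1.0),
        ("cephfs.cephfs.meta", 0.00015084340003052672, 4.0),
    ]

    for name, share, bias in pools:
        print(name, share * bias * PG_BUDGET)
    # Output matches the logged "pg target" values (0.00215572...,
    # 0.19986309..., 0.18101208...) up to float rounding. The module
    # then quantizes each raw target to a power of two with a lower
    # bound, which is why these pools stay at 1, 32 and 16 PGs.
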
Jan 21 14:15:50 compute-0 podman[249913]: 2026-01-21 14:15:50.823834359 +0000 UTC m=+0.324882515 container create 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:15:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:51 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:51 compute-0 systemd[1]: Started libpod-conmon-55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73.scope.
Jan 21 14:15:51 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ec72c5c1177c1e6f4efb69ce520a89a2ebf71d934abde43829b915c7b3aa35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ec72c5c1177c1e6f4efb69ce520a89a2ebf71d934abde43829b915c7b3aa35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ec72c5c1177c1e6f4efb69ce520a89a2ebf71d934abde43829b915c7b3aa35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ec72c5c1177c1e6f4efb69ce520a89a2ebf71d934abde43829b915c7b3aa35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
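
These four kernel lines appear when podman bind-mounts host XFS paths into the new container's mount namespace: the filesystem was made without the bigtime feature, so inode timestamps are signed 32-bit seconds and the mount code warns they only reach 2038 (0x7fffffff). The hex constant is just the largest signed 32-bit time_t, which is easy to confirm:

    from datetime import datetime, timezone

    # 0x7fffffff is the maximum signed 32-bit epoch value the on-disk
    # inode format can represent.
    limit = 0x7fffffff
    print(limit)                                          # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
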
Jan 21 14:15:51 compute-0 podman[249913]: 2026-01-21 14:15:51.264421424 +0000 UTC m=+0.765469570 container init 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 14:15:51 compute-0 podman[249913]: 2026-01-21 14:15:51.273313268 +0000 UTC m=+0.774361404 container start 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 14:15:51 compute-0 podman[249913]: 2026-01-21 14:15:51.302519043 +0000 UTC m=+0.803567359 container attach 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
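
Together with the crazy_swanson teardown at 14:15:50, this is one complete life of a short-lived quay.io/ceph/ceph container: image pull, create, init, start and attach within about a second, followed (at 14:15:52 below) by died, scope deactivation and remove. Judging by the mgr/cephadm/host.compute-0.devices config-key update that follows, these one-shot containers are the cephadm mgr module refreshing its host and device inventory by running helpers inside the ceph image. The same verbs can be watched live from podman's event stream; a small sketch (the JSON field names are as emitted by recent podman and should be treated as an assumption):

    import json
    import subprocess

    # Follow podman lifecycle events; "{{json .}}" makes podman print
    # one JSON object per event.
    proc = subprocess.Popen(
        ["podman", "events", "--since", "1m", "--format", "{{json .}}"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # The verbs match the journal above: pull, create, init, start,
        # attach, died, remove.
        print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Name", ""))
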
Jan 21 14:15:51 compute-0 nova_compute[239261]: 2026-01-21 14:15:51.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:15:51 compute-0 lvm[250011]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:15:51 compute-0 lvm[250008]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:15:51 compute-0 lvm[250009]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:15:51 compute-0 lvm[250011]: VG ceph_vg2 finished
Jan 21 14:15:51 compute-0 lvm[250008]: VG ceph_vg0 finished
Jan 21 14:15:51 compute-0 lvm[250009]: VG ceph_vg1 finished
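
These paired "PV ... online" / "VG ... finished" messages are lvm's event-based autoactivation: udev hands each newly appeared loop device to pvscan, the PV is marked online, and as soon as every PV belonging to a VG is present the VG is declared complete and activated. Each ceph_vgN here sits on exactly one loop-backed PV, consistent with the three-OSD, 60 GiB cluster the pgmap lines report. A quick way to confirm the one-PV-per-VG layout (standard lvm reporting options; the sizes shown are what the 3 x 20 GiB assumption would predict):

    import subprocess

    # One row per VG with its PV count and size; --noheadings keeps the
    # output trivially splittable.
    out = subprocess.run(
        ["vgs", "--noheadings", "-o", "vg_name,pv_count,vg_size"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in out.splitlines():
        print(row.split())   # e.g. ['ceph_vg0', '1', '20.00g']
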
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:52 compute-0 stoic_keldysh[249930]: {}
Jan 21 14:15:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:15:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:15:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 14:15:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:15:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:15:52 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
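
This block is the access-revocation half of the OpenStack CephFS workflow. client.openstack asks the mgr volumes module to deauthorize auth_id "alice" on the subvolume; the mgr reads the existing key with "auth get" and, as no other grants remain here, removes it outright with "auth rm". The follow-up "fs subvolume evict" then asks the MDS, via its admin socket, to drop any live sessions whose auth_name and client_metadata.root match the subvolume. Both steps are plain JSON commands dispatched to the mgr; a hedged sketch using the python-rados mgr_command interface (conffile and client name are placeholders):

    import json
    import rados

    # Placeholder connection details; substitute real ones.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()

    def mgr(payload):
        # mgr_command takes a JSON string; returns (ret, outbuf, outs).
        return cluster.mgr_command(json.dumps(payload), b"")

    sub = "424167b3-6c3d-4062-8da1-4d053af4cf7b"
    # Payloads copied from the audit lines above.
    mgr({"prefix": "fs subvolume deauthorize", "vol_name": "cephfs",
         "sub_name": sub, "auth_id": "alice", "format": "json"})
    mgr({"prefix": "fs subvolume evict", "vol_name": "cephfs",
         "sub_name": sub, "auth_id": "alice", "format": "json"})
    cluster.shutdown()
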
Jan 21 14:15:52 compute-0 systemd[1]: libpod-55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73.scope: Deactivated successfully.
Jan 21 14:15:52 compute-0 podman[249913]: 2026-01-21 14:15:52.092914843 +0000 UTC m=+1.593962959 container died 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 21 14:15:52 compute-0 systemd[1]: libpod-55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73.scope: Consumed 1.428s CPU time.
Jan 21 14:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3ec72c5c1177c1e6f4efb69ce520a89a2ebf71d934abde43829b915c7b3aa35-merged.mount: Deactivated successfully.
Jan 21 14:15:52 compute-0 podman[249913]: 2026-01-21 14:15:52.144943047 +0000 UTC m=+1.645991173 container remove 55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 14:15:52 compute-0 systemd[1]: libpod-conmon-55a7f0882ee864c334fd8d1110535bcf8a9c82ab2ab8248b6c7a2f7d09d60a73.scope: Deactivated successfully.
Jan 21 14:15:52 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "49d5247d-28e2-437f-92c4-34b98896805f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:52 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "49d5247d-28e2-437f-92c4-34b98896805f", "format": "json"}]: dispatch
Jan 21 14:15:52 compute-0 ceph-mon[75031]: pgmap v1069: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 110 KiB/s wr, 14 op/s
Jan 21 14:15:52 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:15:52 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:15:52 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:15:52 compute-0 sudo[249836]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:15:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:15:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:15:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 14:15:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:15:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:15:52 compute-0 sudo[250026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478
Jan 21 14:15:52 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478],prefix=session evict} (starting...)
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:15:52 compute-0 sudo[250026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:15:52 compute-0 sudo[250026]: pam_unix(sudo:session): session closed for user root
Jan 21 14:15:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 81 KiB/s wr, 11 op/s
Jan 21 14:15:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:15:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:15:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:15:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:15:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:15:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:53 compute-0 ceph-mon[75031]: pgmap v1070: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 81 KiB/s wr, 11 op/s
Jan 21 14:15:53 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
Jan 21 14:15:53 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/12981b05-fe8f-4fd8-aee2-ae12c976e9f6/40891370-3f9d-46b0-aed4-8e174a61d9cd'.
Jan 21 14:15:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/12981b05-fe8f-4fd8-aee2-ae12c976e9f6/.meta.tmp'
Jan 21 14:15:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/12981b05-fe8f-4fd8-aee2-ae12c976e9f6/.meta.tmp' to config b'/volumes/_nogroup/12981b05-fe8f-4fd8-aee2-ae12c976e9f6/.meta'
Jan 21 14:15:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
Jan 21 14:15:53 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "format": "json"}]: dispatch
Jan 21 14:15:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
Jan 21 14:15:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
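
The provisioning half is the "fs subvolume create" / "getpath" pair above: a 1 GiB quota (size 1073741824 bytes), an isolated RADOS namespace, mode 0755, and a returned mount path of the form /volumes/_nogroup/<sub_name>/<uuid>. Note how the mgr persists the subvolume's .meta config: it writes the full 155-byte payload to .meta.tmp and then renames it over .meta, so a reader can never observe a half-written file. In the mgr this happens through libcephfs; the local-filesystem equivalent of the same pattern is:

    import os

    def write_config_atomically(path, data: bytes):
        # Write-to-temp then rename, as in the metadata_manager lines
        # above: the rename is atomic, so <path> always holds either
        # the old contents or the complete new ones.
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
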
Jan 21 14:15:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:15:53 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:15:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "format": "json"}]: dispatch
Jan 21 14:15:54 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:15:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 81 KiB/s wr, 11 op/s
Jan 21 14:15:55 compute-0 ceph-mon[75031]: pgmap v1071: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 81 KiB/s wr, 11 op/s
Jan 21 14:15:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:15:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:15:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:15:55 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:15:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:15:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:15:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:15:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:55 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 14:15:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:15:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:55 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
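
The authorize path shows where the cephx capabilities come from: for each grant the mgr runs "auth get-or-create" for client.<auth_id> with mds caps pinned to the subvolume's real path, osd caps pinned to the data pool plus the subvolume's private RADOS namespace (the fsvolumens___nogroup_<sub_name> strings above, present because the subvolumes were created with namespace_isolated), and read-only mon caps; the access level ("rw" here, "r" in a later grant below) is substituted into the mds and osd clauses. A sketch of assembling such a caps list (path and namespace strings are copied from the log; the namespace prefix template itself is internal to the volumes module, so the string handling is illustrative):

    def subvolume_caps(path, data_pool, rados_ns, access_level="rw"):
        # Mirrors the caps in the "auth get-or-create" audit lines:
        # mds access pinned to the subvolume path, osd access to the
        # data pool and per-subvolume namespace, mon read-only.
        return [
            "mds", "allow {} path={}".format(access_level, path),
            "osd", "allow {} pool={} namespace={}".format(
                access_level, data_pool, rados_ns),
            "mon", "allow r",
        ]

    caps = subvolume_caps(
        path="/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/"
             "04464bce-b5c2-48d9-860a-5b8b6ce45575",
        data_pool="cephfs.cephfs.data",
        rados_ns="fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b",
    )
    # caps now equals the list sent for client.alice_bob above.
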
Jan 21 14:15:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:15:56 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:15:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:56 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:15:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:15:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:15:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 117 KiB/s wr, 15 op/s
Jan 21 14:15:57 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "format": "json"}]: dispatch
Jan 21 14:15:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:15:57 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '12981b05-fe8f-4fd8-aee2-ae12c976e9f6' of type subvolume
Jan 21 14:15:57 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:15:57.310+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '12981b05-fe8f-4fd8-aee2-ae12c976e9f6' of type subvolume
Jan 21 14:15:57 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
Jan 21 14:15:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/12981b05-fe8f-4fd8-aee2-ae12c976e9f6'' moved to trashcan
Jan 21 14:15:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:15:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:12981b05-fe8f-4fd8-aee2-ae12c976e9f6, vol_name:cephfs) < ""
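
"fs clone status" is only meaningful for subvolumes that are clones, so probing it on a plain subvolume fails with Operation not supported (error 95, EOPNOTSUPP); client.openstack treats that as "nothing to wait for" and proceeds straight to "fs subvolume rm" with force set. The rm itself returns as soon as the directory is renamed into the trashcan; the actual data purge runs later in the mgr's async job queue ("queuing job for volume"). A sketch of the probe-then-remove step (assumes a connected rados.Rados handle as in the earlier deauthorize sketch):

    import errno
    import json

    def rm_if_not_clone(cluster, vol_name, sub_name):
        # Probe clone status; -EOPNOTSUPP (95) means "plain subvolume",
        # exactly as in the mgr.server reply above.
        probe = {"prefix": "fs clone status", "vol_name": vol_name,
                 "clone_name": sub_name, "format": "json"}
        ret, out, errs = cluster.mgr_command(json.dumps(probe), b"")
        if ret == -errno.EOPNOTSUPP:
            rm = {"prefix": "fs subvolume rm", "vol_name": vol_name,
                  "sub_name": sub_name, "force": True, "format": "json"}
            ret, out, errs = cluster.mgr_command(json.dumps(rm), b"")
        return ret
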
Jan 21 14:15:57 compute-0 ceph-mon[75031]: pgmap v1072: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 117 KiB/s wr, 15 op/s
Jan 21 14:15:58 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "format": "json"}]: dispatch
Jan 21 14:15:58 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "12981b05-fe8f-4fd8-aee2-ae12c976e9f6", "force": true, "format": "json"}]: dispatch
Jan 21 14:15:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 98 KiB/s wr, 12 op/s
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:15:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:15:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 14:15:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:15:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:15:59 compute-0 ceph-mon[75031]: pgmap v1073: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 98 KiB/s wr, 12 op/s
Jan 21 14:15:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:15:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:15:59 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:15:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:15:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:15:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 14:15:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:15:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478
Jan 21 14:15:59 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478],prefix=session evict} (starting...)
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:15:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:16:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:16:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:16:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:16:00 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:16:00 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:16:00 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:16:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:16:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 152 KiB/s wr, 19 op/s
Jan 21 14:16:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "49d5247d-28e2-437f-92c4-34b98896805f", "format": "json"}]: dispatch
Jan 21 14:16:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:49d5247d-28e2-437f-92c4-34b98896805f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:49d5247d-28e2-437f-92c4-34b98896805f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:00 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '49d5247d-28e2-437f-92c4-34b98896805f' of type subvolume
Jan 21 14:16:00 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:00.811+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '49d5247d-28e2-437f-92c4-34b98896805f' of type subvolume
Jan 21 14:16:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "49d5247d-28e2-437f-92c4-34b98896805f", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 14:16:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/49d5247d-28e2-437f-92c4-34b98896805f'' moved to trashcan
Jan 21 14:16:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:16:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:49d5247d-28e2-437f-92c4-34b98896805f, vol_name:cephfs) < ""
Jan 21 14:16:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:02 compute-0 ceph-mon[75031]: pgmap v1074: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 152 KiB/s wr, 19 op/s
Jan 21 14:16:02 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "49d5247d-28e2-437f-92c4-34b98896805f", "format": "json"}]: dispatch
Jan 21 14:16:02 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "49d5247d-28e2-437f-92c4-34b98896805f", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 11 op/s
Jan 21 14:16:03 compute-0 ceph-mon[75031]: pgmap v1075: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 11 op/s
Jan 21 14:16:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:16:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:16:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:16:03 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:16:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:16:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:16:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:16:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:16:03 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-1440431664 with tenant a226ad4df79b48a2b4c6ddc1ed2cb474
Jan 21 14:16:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:16:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume authorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, tenant_id:a226ad4df79b48a2b4c6ddc1ed2cb474, vol_name:cephfs) < ""
Jan 21 14:16:04 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:16:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:16:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:04 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "tenant_id": "a226ad4df79b48a2b4c6ddc1ed2cb474", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:16:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1440431664", "caps": ["mds", "allow rw path=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cb5ab99b-0e59-4153-829e-95580fc1cdff", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 12 op/s
Jan 21 14:16:04 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 14:16:04 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/62ba25ce-388a-4a45-8f6e-7a5833c81f31/f099d9d8-babe-4015-821a-199cd77c6934'.
Jan 21 14:16:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/62ba25ce-388a-4a45-8f6e-7a5833c81f31/.meta.tmp'
Jan 21 14:16:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/62ba25ce-388a-4a45-8f6e-7a5833c81f31/.meta.tmp' to config b'/volumes/_nogroup/62ba25ce-388a-4a45-8f6e-7a5833c81f31/.meta'
Jan 21 14:16:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 14:16:04 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "format": "json"}]: dispatch
Jan 21 14:16:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 14:16:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 14:16:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:16:04 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:05 compute-0 ceph-mon[75031]: pgmap v1076: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 12 op/s
Jan 21 14:16:05 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:05 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "format": "json"}]: dispatch
Jan 21 14:16:05 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 144 KiB/s wr, 18 op/s
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:16:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:16:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 14:16:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:16:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:16:07 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:16:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} v 0)
Jan 21 14:16:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:16:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} v 0)
Jan 21 14:16:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:16:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume deauthorize, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1440431664, client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478
Jan 21 14:16:07 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-1440431664,client_metadata.root=/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff/d1ede643-cc11-4a46-837a-818a7e57f478],prefix=session evict} (starting...)
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:16:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1440431664, format:json, prefix:fs subvolume evict, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:16:07 compute-0 ceph-mon[75031]: pgmap v1077: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 144 KiB/s wr, 18 op/s
Jan 21 14:16:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:16:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:16:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:16:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1440431664", "format": "json"} : dispatch
Jan 21 14:16:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"} : dispatch
Jan 21 14:16:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1440431664"}]': finished
Jan 21 14:16:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 107 KiB/s wr, 14 op/s
Jan 21 14:16:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:16:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:16:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:16:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "auth_id": "tempest-cephx-id-1440431664", "format": "json"}]: dispatch
Jan 21 14:16:08 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "format": "json"}]: dispatch
Jan 21 14:16:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:08 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:08.591+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '62ba25ce-388a-4a45-8f6e-7a5833c81f31' of type subvolume
Jan 21 14:16:08 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '62ba25ce-388a-4a45-8f6e-7a5833c81f31' of type subvolume
Jan 21 14:16:08 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 14:16:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/62ba25ce-388a-4a45-8f6e-7a5833c81f31'' moved to trashcan
Jan 21 14:16:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:16:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:62ba25ce-388a-4a45-8f6e-7a5833c81f31, vol_name:cephfs) < ""
Jan 21 14:16:09 compute-0 podman[250057]: 2026-01-21 14:16:09.34321212 +0000 UTC m=+0.063730037 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 21 14:16:09 compute-0 podman[250056]: 2026-01-21 14:16:09.380675314 +0000 UTC m=+0.101192301 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 21 14:16:09 compute-0 ceph-mon[75031]: pgmap v1078: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 107 KiB/s wr, 14 op/s
Jan 21 14:16:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "format": "json"}]: dispatch
Jan 21 14:16:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "62ba25ce-388a-4a45-8f6e-7a5833c81f31", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 145 KiB/s wr, 18 op/s
Jan 21 14:16:10 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:16:10 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:16:10 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:16:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:16:10 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:10 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:16:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:16:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:16:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:16:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:16:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:16:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:11 compute-0 ceph-mon[75031]: pgmap v1079: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 145 KiB/s wr, 18 op/s
Jan 21 14:16:11 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:16:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:12 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "format": "json"}]: dispatch
Jan 21 14:16:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:12 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:12.109+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cb5ab99b-0e59-4153-829e-95580fc1cdff' of type subvolume
Jan 21 14:16:12 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cb5ab99b-0e59-4153-829e-95580fc1cdff' of type subvolume
Jan 21 14:16:12 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:16:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cb5ab99b-0e59-4153-829e-95580fc1cdff'' moved to trashcan
Jan 21 14:16:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:16:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cb5ab99b-0e59-4153-829e-95580fc1cdff, vol_name:cephfs) < ""
Jan 21 14:16:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 91 KiB/s wr, 11 op/s
Jan 21 14:16:13 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "format": "json"}]: dispatch
Jan 21 14:16:13 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cb5ab99b-0e59-4153-829e-95580fc1cdff", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:13 compute-0 ceph-mon[75031]: pgmap v1080: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 91 KiB/s wr, 11 op/s
Jan 21 14:16:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 92 KiB/s wr, 12 op/s
Jan 21 14:16:14 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:16:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:16:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:16:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 14:16:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:16:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.767454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004974767493, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2338, "num_deletes": 253, "total_data_size": 3372699, "memory_usage": 3424048, "flush_reason": "Manual Compaction"}
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 21 14:16:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:14 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:16:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:16:14 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:16:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:16:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004974943721, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3291406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21129, "largest_seqno": 23466, "table_properties": {"data_size": 3280903, "index_size": 6485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 24917, "raw_average_key_size": 21, "raw_value_size": 3258565, "raw_average_value_size": 2782, "num_data_blocks": 288, "num_entries": 1171, "num_filter_entries": 1171, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004809, "oldest_key_time": 1769004809, "file_creation_time": 1769004974, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 176369 microseconds, and 8542 cpu microseconds.
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.943810) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3291406 bytes OK
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.943845) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.948927) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.948964) EVENT_LOG_v1 {"time_micros": 1769004974948954, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.948992) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3362152, prev total WAL file size 3362152, number of live WAL files 2.
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.951085) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3214KB)], [50(7694KB)]
Jan 21 14:16:14 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004974951157, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11170669, "oldest_snapshot_seqno": -1}
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5106 keys, 9386170 bytes, temperature: kUnknown
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004975088689, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9386170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9349904, "index_size": 22396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 125716, "raw_average_key_size": 24, "raw_value_size": 9255917, "raw_average_value_size": 1812, "num_data_blocks": 937, "num_entries": 5106, "num_filter_entries": 5106, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769004974, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.089679) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9386170 bytes
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.091579) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.1 rd, 68.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.2) write-amplify(2.9) OK, records in: 5635, records dropped: 529 output_compression: NoCompression
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.091601) EVENT_LOG_v1 {"time_micros": 1769004975091591, "job": 26, "event": "compaction_finished", "compaction_time_micros": 137702, "compaction_time_cpu_micros": 32317, "output_level": 6, "num_output_files": 1, "total_output_size": 9386170, "num_input_records": 5635, "num_output_records": 5106, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004975092448, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769004975094233, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:14.950981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.094344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.094349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.094352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.094354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:16:15 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:16:15.094356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:16:15 compute-0 ceph-mon[75031]: pgmap v1081: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 92 KiB/s wr, 12 op/s
Jan 21 14:16:15 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:16:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:16:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:16:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:16:15 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:16:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 115 KiB/s wr, 14 op/s
Jan 21 14:16:17 compute-0 ceph-mon[75031]: pgmap v1082: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 115 KiB/s wr, 14 op/s
Jan 21 14:16:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 62 KiB/s wr, 8 op/s
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:16:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:16:19 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:16:19 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:16:19.611 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:16:19 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:16:19.612 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:16:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:16:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:19 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:19 compute-0 ceph-mon[75031]: pgmap v1083: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 62 KiB/s wr, 8 op/s
Jan 21 14:16:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:16:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/c1e7aa2f-b3f5-4340-abef-4ba3f2f8f5fc'.
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp'
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp' to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta'
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "format": "json"}]: dispatch
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:16:19 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 82 KiB/s wr, 11 op/s
Jan 21 14:16:20 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:16:20 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:20 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "format": "json"}]: dispatch
Jan 21 14:16:20 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:21 compute-0 ceph-mon[75031]: pgmap v1084: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 82 KiB/s wr, 11 op/s
Jan 21 14:16:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 44 KiB/s wr, 6 op/s
Jan 21 14:16:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:16:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3981460682' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:16:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:16:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3981460682' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:16:23 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:16:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:16:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:16:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 14:16:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:16:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:16:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:23 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:16:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:16:23 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:16:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:16:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:23 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "snap_name": "cd26f1f2-bc6e-4358-affb-44dc2065fabf", "format": "json"}]: dispatch
Jan 21 14:16:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:23 compute-0 ceph-mon[75031]: pgmap v1085: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 44 KiB/s wr, 6 op/s
Jan 21 14:16:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/3981460682' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:16:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/3981460682' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:16:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:16:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:16:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:16:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 45 KiB/s wr, 8 op/s
Jan 21 14:16:24 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:16:24 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:16:24 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "snap_name": "cd26f1f2-bc6e-4358-affb-44dc2065fabf", "format": "json"}]: dispatch
Jan 21 14:16:25 compute-0 ceph-mon[75031]: pgmap v1086: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 45 KiB/s wr, 8 op/s
Jan 21 14:16:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 77 KiB/s wr, 10 op/s
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:16:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:16:27 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:16:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:16:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:27 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "snap_name": "cd26f1f2-bc6e-4358-affb-44dc2065fabf_daf7ebd9-3c4d-4d8d-a4bf-e96f88964a3b", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf_daf7ebd9-3c4d-4d8d-a4bf-e96f88964a3b, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp'
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp' to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta'
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf_daf7ebd9-3c4d-4d8d-a4bf-e96f88964a3b, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "snap_name": "cd26f1f2-bc6e-4358-affb-44dc2065fabf", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp'
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta.tmp' to config b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33/.meta'
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cd26f1f2-bc6e-4358-affb-44dc2065fabf, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/96494b3b-24ff-4794-b86c-c27bb64a476f/114899ef-653e-4eb2-b694-cbe1ddce5d94'.
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/96494b3b-24ff-4794-b86c-c27bb64a476f/.meta.tmp'
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/96494b3b-24ff-4794-b86c-c27bb64a476f/.meta.tmp' to config b'/volumes/_nogroup/96494b3b-24ff-4794-b86c-c27bb64a476f/.meta'
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "format": "json"}]: dispatch
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 14:16:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 14:16:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:16:27 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:27 compute-0 ceph-mon[75031]: pgmap v1087: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 77 KiB/s wr, 10 op/s
Jan 21 14:16:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:16:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:27 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:27 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 54 KiB/s wr, 6 op/s
Jan 21 14:16:28 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:28 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "snap_name": "cd26f1f2-bc6e-4358-affb-44dc2065fabf_daf7ebd9-3c4d-4d8d-a4bf-e96f88964a3b", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:28 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "snap_name": "cd26f1f2-bc6e-4358-affb-44dc2065fabf", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:28 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:28 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "format": "json"}]: dispatch
Jan 21 14:16:29 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:16:29.615 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:16:29 compute-0 ceph-mon[75031]: pgmap v1088: 305 pgs: 305 active+clean; 53 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 54 KiB/s wr, 6 op/s
Jan 21 14:16:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 86 KiB/s wr, 12 op/s
Jan 21 14:16:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 21 14:16:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 21 14:16:31 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 21 14:16:31 compute-0 ceph-mon[75031]: pgmap v1089: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 86 KiB/s wr, 12 op/s
Jan 21 14:16:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 79 KiB/s wr, 11 op/s
Jan 21 14:16:32 compute-0 ceph-mon[75031]: osdmap e144: 3 total, 3 up, 3 in
Jan 21 14:16:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:16:33.906 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:16:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:16:33.907 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:16:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:16:33.907 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a444a045-3a18-4422-831d-838f3d178e33", "format": "json"}]: dispatch
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a444a045-3a18-4422-831d-838f3d178e33, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a444a045-3a18-4422-831d-838f3d178e33, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a444a045-3a18-4422-831d-838f3d178e33' of type subvolume
Jan 21 14:16:34 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:34.240+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a444a045-3a18-4422-831d-838f3d178e33' of type subvolume
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a444a045-3a18-4422-831d-838f3d178e33'' moved to trashcan
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a444a045-3a18-4422-831d-838f3d178e33, vol_name:cephfs) < ""
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 84 KiB/s wr, 71 op/s
Jan 21 14:16:34 compute-0 ceph-mon[75031]: pgmap v1091: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 79 KiB/s wr, 11 op/s
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:16:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:16:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 14:16:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:16:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:16:34 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:16:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:35 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a444a045-3a18-4422-831d-838f3d178e33", "format": "json"}]: dispatch
Jan 21 14:16:35 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a444a045-3a18-4422-831d-838f3d178e33", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:35 compute-0 ceph-mon[75031]: pgmap v1092: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 84 KiB/s wr, 71 op/s
Jan 21 14:16:35 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:16:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:16:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:16:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:16:35 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:16:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:36 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "format": "json"}]: dispatch
Jan 21 14:16:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:96494b3b-24ff-4794-b86c-c27bb64a476f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:96494b3b-24ff-4794-b86c-c27bb64a476f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:36 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:36.212+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '96494b3b-24ff-4794-b86c-c27bb64a476f' of type subvolume
Jan 21 14:16:36 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '96494b3b-24ff-4794-b86c-c27bb64a476f' of type subvolume
Jan 21 14:16:36 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 14:16:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/96494b3b-24ff-4794-b86c-c27bb64a476f'' moved to trashcan
Jan 21 14:16:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:16:36 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:96494b3b-24ff-4794-b86c-c27bb64a476f, vol_name:cephfs) < ""
Jan 21 14:16:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 44 KiB/s wr, 97 op/s
Jan 21 14:16:36 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 14:16:37 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "format": "json"}]: dispatch
Jan 21 14:16:37 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "96494b3b-24ff-4794-b86c-c27bb64a476f", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:37 compute-0 ceph-mon[75031]: pgmap v1093: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 44 KiB/s wr, 97 op/s
Jan 21 14:16:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 44 KiB/s wr, 97 op/s
Jan 21 14:16:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:16:38 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37'.
Jan 21 14:16:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/.meta.tmp'
Jan 21 14:16:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/.meta.tmp' to config b'/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/.meta'
Jan 21 14:16:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:16:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "format": "json"}]: dispatch
Jan 21 14:16:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:16:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:16:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:16:38 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:16:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:16:39 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:16:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:16:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5006c4a9-49c2-40a4-8229-4463bddd3634/f85cfe1c-ebd8-429a-98e6-8135bd6e60d3'.
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5006c4a9-49c2-40a4-8229-4463bddd3634/.meta.tmp'
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5006c4a9-49c2-40a4-8229-4463bddd3634/.meta.tmp' to config b'/volumes/_nogroup/5006c4a9-49c2-40a4-8229-4463bddd3634/.meta'
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "format": "json"}]: dispatch
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 14:16:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:16:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:16:39
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'images']
Jan 21 14:16:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:16:39 compute-0 ceph-mon[75031]: pgmap v1094: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 44 KiB/s wr, 97 op/s
Jan 21 14:16:39 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:39 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "format": "json"}]: dispatch
Jan 21 14:16:39 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:16:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:39 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:40 compute-0 podman[250103]: 2026-01-21 14:16:40.330889557 +0000 UTC m=+0.051721958 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Jan 21 14:16:40 compute-0 podman[250102]: 2026-01-21 14:16:40.366204409 +0000 UTC m=+0.089296975 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Jan 21 14:16:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 60 KiB/s wr, 97 op/s
Jan 21 14:16:40 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:16:40 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:40 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "format": "json"}]: dispatch
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:16:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 21 14:16:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 21 14:16:41 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:16:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:16:41 compute-0 ceph-mon[75031]: pgmap v1095: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 60 KiB/s wr, 97 op/s
Jan 21 14:16:41 compute-0 ceph-mon[75031]: osdmap e145: 3 total, 3 up, 3 in
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 60 KiB/s wr, 97 op/s
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118'.
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/.meta.tmp'
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/.meta.tmp' to config b'/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/.meta'
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "format": "json"}]: dispatch
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 14:16:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:16:42 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:16:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:42 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:16:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:16:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 14:16:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:16:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:16:43 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "format": "json"}]: dispatch
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5006c4a9-49c2-40a4-8229-4463bddd3634, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5006c4a9-49c2-40a4-8229-4463bddd3634, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:43 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:43.250+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5006c4a9-49c2-40a4-8229-4463bddd3634' of type subvolume
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5006c4a9-49c2-40a4-8229-4463bddd3634' of type subvolume
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5006c4a9-49c2-40a4-8229-4463bddd3634'' moved to trashcan
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:16:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5006c4a9-49c2-40a4-8229-4463bddd3634, vol_name:cephfs) < ""
Jan 21 14:16:43 compute-0 nova_compute[239261]: 2026-01-21 14:16:43.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:16:43 compute-0 ceph-mon[75031]: pgmap v1097: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 60 KiB/s wr, 97 op/s
Jan 21 14:16:43 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:43 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "format": "json"}]: dispatch
Jan 21 14:16:43 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:16:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:16:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:16:43 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:16:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 116 KiB/s wr, 40 op/s
Jan 21 14:16:44 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:16:44 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "format": "json"}]: dispatch
Jan 21 14:16:44 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5006c4a9-49c2-40a4-8229-4463bddd3634", "force": true, "format": "json"}]: dispatch
Jan 21 14:16:45 compute-0 nova_compute[239261]: 2026-01-21 14:16:45.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:16:45 compute-0 nova_compute[239261]: 2026-01-21 14:16:45.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:16:45 compute-0 nova_compute[239261]: 2026-01-21 14:16:45.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:16:45 compute-0 ceph-mon[75031]: pgmap v1098: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 116 KiB/s wr, 40 op/s
Jan 21 14:16:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 117 KiB/s wr, 12 op/s
Jan 21 14:16:46 compute-0 nova_compute[239261]: 2026-01-21 14:16:46.459 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:16:46 compute-0 nova_compute[239261]: 2026-01-21 14:16:46.460 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:16:46 compute-0 nova_compute[239261]: 2026-01-21 14:16:46.540 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:16:46 compute-0 nova_compute[239261]: 2026-01-21 14:16:46.540 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:16:46 compute-0 nova_compute[239261]: 2026-01-21 14:16:46.541 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:16:46 compute-0 nova_compute[239261]: 2026-01-21 14:16:46.541 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:16:46 compute-0 nova_compute[239261]: 2026-01-21 14:16:46.541 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:16:46 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "auth_id": "Joe", "tenant_id": "183d8c03d481485397037ffe17a60995", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 14:16:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:16:47 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3175815341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:16:47 compute-0 nova_compute[239261]: 2026-01-21 14:16:47.237 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.696s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:16:47 compute-0 nova_compute[239261]: 2026-01-21 14:16:47.413 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:16:47 compute-0 nova_compute[239261]: 2026-01-21 14:16:47.414 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5081MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:16:47 compute-0 nova_compute[239261]: 2026-01-21 14:16:47.415 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:16:47 compute-0 nova_compute[239261]: 2026-01-21 14:16:47.415 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:16:47 compute-0 nova_compute[239261]: 2026-01-21 14:16:47.617 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:16:47 compute-0 nova_compute[239261]: 2026-01-21 14:16:47.617 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:16:47 compute-0 nova_compute[239261]: 2026-01-21 14:16:47.636 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:16:48 compute-0 ceph-mon[75031]: pgmap v1099: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 117 KiB/s wr, 12 op/s
Jan 21 14:16:48 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "auth_id": "Joe", "tenant_id": "183d8c03d481485397037ffe17a60995", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:48 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3175815341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:16:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Jan 21 14:16:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 14:16:48 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID Joe with tenant 183d8c03d481485397037ffe17a60995
Jan 21 14:16:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9d63fab0-cc30-4952-b485-806c5f0f78c2", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:16:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9d63fab0-cc30-4952-b485-806c5f0f78c2", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9d63fab0-cc30-4952-b485-806c5f0f78c2", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:16:48 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/33814217' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 14:16:48 compute-0 nova_compute[239261]: 2026-01-21 14:16:48.228 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:16:48 compute-0 nova_compute[239261]: 2026-01-21 14:16:48.233 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:16:48 compute-0 nova_compute[239261]: 2026-01-21 14:16:48.317 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:16:48 compute-0 nova_compute[239261]: 2026-01-21 14:16:48.319 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:16:48 compute-0 nova_compute[239261]: 2026-01-21 14:16:48.320 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 117 KiB/s wr, 12 op/s
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:16:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:16:48 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:16:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:16:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:48 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/c83a9347-47a8-43c7-ae36-697341704e14'.
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp'
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp' to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta'
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "format": "json"}]: dispatch
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:16:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:16:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:16:48 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 14:16:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9d63fab0-cc30-4952-b485-806c5f0f78c2", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_9d63fab0-cc30-4952-b485-806c5f0f78c2", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:49 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/33814217' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:16:49 compute-0 ceph-mon[75031]: pgmap v1100: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 117 KiB/s wr, 12 op/s
Jan 21 14:16:49 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:16:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:49 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:49 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "format": "json"}]: dispatch
Jan 21 14:16:49 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:50 compute-0 nova_compute[239261]: 2026-01-21 14:16:50.315 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:16:50 compute-0 nova_compute[239261]: 2026-01-21 14:16:50.315 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:16:50 compute-0 nova_compute[239261]: 2026-01-21 14:16:50.316 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:16:50 compute-0 nova_compute[239261]: 2026-01-21 14:16:50.316 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:16:50 compute-0 nova_compute[239261]: 2026-01-21 14:16:50.316 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:16:50 compute-0 nova_compute[239261]: 2026-01-21 14:16:50.316 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 107 KiB/s wr, 11 op/s
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662290559085449 of space, bias 1.0, pg target 0.19986871677256346 quantized to 32 (current 32)
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00022772385163210003 of space, bias 4.0, pg target 0.27326862195852003 quantized to 16 (current 16)
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:16:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:16:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:51 compute-0 ceph-mon[75031]: pgmap v1101: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 107 KiB/s wr, 11 op/s
Jan 21 14:16:51 compute-0 nova_compute[239261]: 2026-01-21 14:16:51.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 96 KiB/s wr, 10 op/s
Jan 21 14:16:52 compute-0 sudo[250191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:16:52 compute-0 sudo[250191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:16:52 compute-0 sudo[250191]: pam_unix(sudo:session): session closed for user root
Jan 21 14:16:52 compute-0 sudo[250216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 21 14:16:52 compute-0 sudo[250216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:16:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:16:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:16:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 14:16:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:16:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:16:52 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239'.
Jan 21 14:16:52 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:16:52 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:16:52 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/.meta.tmp'
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/.meta.tmp' to config b'/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/.meta'
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "format": "json"}]: dispatch
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:16:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:16:52 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37", "format": "json"}]: dispatch
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:16:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:16:52 compute-0 sudo[250216]: pam_unix(sudo:session): session closed for user root
Jan 21 14:16:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:16:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:16:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:16:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:16:52 compute-0 sudo[250262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:16:52 compute-0 sudo[250262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:16:53 compute-0 sudo[250262]: pam_unix(sudo:session): session closed for user root
Jan 21 14:16:53 compute-0 sudo[250287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:16:53 compute-0 sudo[250287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:16:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:16:53 compute-0 ceph-mon[75031]: pgmap v1102: 305 pgs: 305 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 96 KiB/s wr, 10 op/s
Jan 21 14:16:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:16:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:16:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "format": "json"}]: dispatch
Jan 21 14:16:53 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:16:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37", "format": "json"}]: dispatch
Jan 21 14:16:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:16:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:16:53 compute-0 sudo[250287]: pam_unix(sudo:session): session closed for user root
Jan 21 14:16:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:16:53 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:16:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:16:53 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:16:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:16:53 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:16:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:16:53 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:16:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:16:53 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:16:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:16:53 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:16:53 compute-0 sudo[250341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:16:53 compute-0 sudo[250341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:16:53 compute-0 sudo[250341]: pam_unix(sudo:session): session closed for user root
Jan 21 14:16:53 compute-0 sudo[250366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:16:53 compute-0 sudo[250366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:16:54 compute-0 podman[250403]: 2026-01-21 14:16:54.266700228 +0000 UTC m=+0.055061479 container create 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:16:54 compute-0 systemd[1]: Started libpod-conmon-2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce.scope.
Jan 21 14:16:54 compute-0 podman[250403]: 2026-01-21 14:16:54.240848675 +0000 UTC m=+0.029209976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:16:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:16:54 compute-0 podman[250403]: 2026-01-21 14:16:54.35678828 +0000 UTC m=+0.145149621 container init 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:16:54 compute-0 podman[250403]: 2026-01-21 14:16:54.366221707 +0000 UTC m=+0.154582998 container start 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:16:54 compute-0 podman[250403]: 2026-01-21 14:16:54.370415529 +0000 UTC m=+0.158776820 container attach 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:16:54 compute-0 pedantic_shannon[250419]: 167 167
Jan 21 14:16:54 compute-0 systemd[1]: libpod-2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce.scope: Deactivated successfully.
Jan 21 14:16:54 compute-0 conmon[250419]: conmon 2fedfc9dc4fc17808d3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce.scope/container/memory.events
Jan 21 14:16:54 compute-0 podman[250403]: 2026-01-21 14:16:54.375111492 +0000 UTC m=+0.163472743 container died 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:16:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-69da20c2b5622dfda643f7eb72289fc31f21b14d564b8f4b3e70dd5be9d0fa49-merged.mount: Deactivated successfully.
Jan 21 14:16:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 119 KiB/s wr, 11 op/s
Jan 21 14:16:54 compute-0 podman[250403]: 2026-01-21 14:16:54.419795049 +0000 UTC m=+0.208156300 container remove 2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:16:54 compute-0 systemd[1]: libpod-conmon-2fedfc9dc4fc17808d3c8e978ccdec582fa8d145925779acebb4b82e258b62ce.scope: Deactivated successfully.
Jan 21 14:16:54 compute-0 podman[250442]: 2026-01-21 14:16:54.617265431 +0000 UTC m=+0.042327892 container create 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 14:16:54 compute-0 systemd[1]: Started libpod-conmon-870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c.scope.
Jan 21 14:16:54 compute-0 podman[250442]: 2026-01-21 14:16:54.59774048 +0000 UTC m=+0.022802961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:16:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:16:54 compute-0 podman[250442]: 2026-01-21 14:16:54.726000853 +0000 UTC m=+0.151063374 container init 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:16:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:16:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:16:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:16:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:16:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:16:54 compute-0 podman[250442]: 2026-01-21 14:16:54.735325498 +0000 UTC m=+0.160387939 container start 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 14:16:54 compute-0 podman[250442]: 2026-01-21 14:16:54.741033436 +0000 UTC m=+0.166096007 container attach 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:16:55 compute-0 nostalgic_jennings[250458]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:16:55 compute-0 nostalgic_jennings[250458]: --> All data devices are unavailable
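The two nostalgic_jennings lines above are the ceph-volume output from the short-lived cephadm container: the drive group resolved to 0 physical disks and 3 LVM logical volumes, and all three were rejected as unavailable, consistent with the `lvm list` payload later in this log showing each LV already tagged for an existing BlueStore OSD. A minimal sketch of re-running the same availability probe by hand, assuming `cephadm` is on PATH and using the fsid that appears throughout this log:

    #!/usr/bin/env python3
    # Hedged sketch: ask ceph-volume (via the cephadm wrapper seen in this
    # log) for its device inventory and print why each candidate is unusable.
    # The fsid is copied from this log; any other cluster would use its own.
    import json
    import subprocess

    FSID = "2f0e9cad-f0a3-5869-9cc3-8d84d071866a"

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for dev in json.loads(out):
        status = "available" if dev["available"] else "unavailable"
        print(dev["path"], status, dev.get("rejected_reasons", []))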
Jan 21 14:16:55 compute-0 systemd[1]: libpod-870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c.scope: Deactivated successfully.
Jan 21 14:16:55 compute-0 podman[250442]: 2026-01-21 14:16:55.252151481 +0000 UTC m=+0.677213972 container died 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-acf8a2c3025f2b629df6e050012cdfb5895f234e7868d1c9c75ebf7b8cb4420d-merged.mount: Deactivated successfully.
Jan 21 14:16:55 compute-0 podman[250442]: 2026-01-21 14:16:55.310307413 +0000 UTC m=+0.735369904 container remove 870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:16:55 compute-0 systemd[1]: libpod-conmon-870cca9ebcaf1e31d76bab85376a891b0355332a60d36b7e4bf6368f818ba28c.scope: Deactivated successfully.
Jan 21 14:16:55 compute-0 sudo[250366]: pam_unix(sudo:session): session closed for user root
Jan 21 14:16:55 compute-0 sudo[250490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:16:55 compute-0 sudo[250490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:16:55 compute-0 sudo[250490]: pam_unix(sudo:session): session closed for user root
Jan 21 14:16:55 compute-0 sudo[250515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:16:55 compute-0 sudo[250515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:16:55 compute-0 ceph-mon[75031]: pgmap v1103: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 119 KiB/s wr, 11 op/s
Jan 21 14:16:55 compute-0 podman[250551]: 2026-01-21 14:16:55.821653584 +0000 UTC m=+0.054648888 container create d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:16:55 compute-0 systemd[1]: Started libpod-conmon-d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4.scope.
Jan 21 14:16:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:16:55 compute-0 podman[250551]: 2026-01-21 14:16:55.896103639 +0000 UTC m=+0.129098963 container init d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:16:55 compute-0 podman[250551]: 2026-01-21 14:16:55.804546791 +0000 UTC m=+0.037542115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:16:55 compute-0 podman[250551]: 2026-01-21 14:16:55.90193473 +0000 UTC m=+0.134930064 container start d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 21 14:16:55 compute-0 podman[250551]: 2026-01-21 14:16:55.906143331 +0000 UTC m=+0.139138625 container attach d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:16:55 compute-0 reverent_murdock[250567]: 167 167
Jan 21 14:16:55 compute-0 systemd[1]: libpod-d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4.scope: Deactivated successfully.
Jan 21 14:16:55 compute-0 conmon[250567]: conmon d72054ffb6267edbab14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4.scope/container/memory.events
Jan 21 14:16:55 compute-0 podman[250551]: 2026-01-21 14:16:55.909041451 +0000 UTC m=+0.142036745 container died d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 14:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-2345ebcb45c8c0ef74b24118d52407823dda686da64705f0267c97f837a6c355-merged.mount: Deactivated successfully.
Jan 21 14:16:55 compute-0 podman[250551]: 2026-01-21 14:16:55.958315669 +0000 UTC m=+0.191310973 container remove d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_murdock, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 14:16:55 compute-0 systemd[1]: libpod-conmon-d72054ffb6267edbab14680e1f7fa77d1aaa75921d451baf2004a5dbfa7cc9f4.scope: Deactivated successfully.
Jan 21 14:16:56 compute-0 podman[250590]: 2026-01-21 14:16:56.147674445 +0000 UTC m=+0.061857162 container create 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 14:16:56 compute-0 systemd[1]: Started libpod-conmon-6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a.scope.
Jan 21 14:16:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:16:56 compute-0 podman[250590]: 2026-01-21 14:16:56.119466025 +0000 UTC m=+0.033648842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:16:56 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a026a656b567710b34e1f34bf1f77e4c5f9db291feec74cb03fb9ad3cefa04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a026a656b567710b34e1f34bf1f77e4c5f9db291feec74cb03fb9ad3cefa04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a026a656b567710b34e1f34bf1f77e4c5f9db291feec74cb03fb9ad3cefa04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a026a656b567710b34e1f34bf1f77e4c5f9db291feec74cb03fb9ad3cefa04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:56 compute-0 podman[250590]: 2026-01-21 14:16:56.238215079 +0000 UTC m=+0.152397806 container init 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 14:16:56 compute-0 podman[250590]: 2026-01-21 14:16:56.25318801 +0000 UTC m=+0.167370747 container start 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:16:56 compute-0 podman[250590]: 2026-01-21 14:16:56.257972925 +0000 UTC m=+0.172155652 container attach 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37", "target_sub_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, target_sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/87b82afe-d56b-4746-a27d-d86b000ae695'.
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp' to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] tracking-id ce770c3c-0f1b-4ca4-a3e0-05207fc9c27f for path b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp' to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, target_sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
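The lines dispatched by client.openstack at 14:16:56 trace a complete mgr/volumes clone setup: `fs subvolume snapshot clone` creates the target subvolume, writes its .meta files, records tracking-id ce770c3c-0f1b-4ca4-a3e0-05207fc9c27f in the clone index, and queues an async copy job before returning. A minimal sketch of issuing the same command through the `ceph` CLI, with the volume, subvolume, snapshot, and clone names copied from the dispatch above:

    #!/usr/bin/env python3
    # Hedged sketch: send the same "fs subvolume snapshot clone" request
    # that client.openstack dispatches above. Requires a keyring with mgr
    # access; the UUIDs are the ones from this log.
    import subprocess

    VOL = "cephfs"
    SRC_SUBVOL = "d96d5fd8-0350-40e0-a742-9103d3d18e31"
    SNAP = "fb763622-636c-421d-a618-54f14cb70a37"
    CLONE = "af536fd1-8269-495b-9b12-b007bdeeab50"

    subprocess.run(
        ["ceph", "fs", "subvolume", "snapshot", "clone",
         VOL, SRC_SUBVOL, SNAP, CLONE],
        check=True,
    )

The clone then proceeds asynchronously; its progress surfaces through the `fs clone status` dispatches that follow in this log.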
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.350+0000 7fc51b65f640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.350+0000 7fc51b65f640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.350+0000 7fc51b65f640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.350+0000 7fc51b65f640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.350+0000 7fc51b65f640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, af536fd1-8269-495b-9b12-b007bdeeab50)
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.376+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.376+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.376+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.376+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.376+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, af536fd1-8269-495b-9b12-b007bdeeab50) -- by 0 seconds
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp' to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 67 KiB/s wr, 8 op/s
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:56 compute-0 sharp_tu[250607]: {
Jan 21 14:16:56 compute-0 sharp_tu[250607]:     "0": [
Jan 21 14:16:56 compute-0 sharp_tu[250607]:         {
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "devices": [
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "/dev/loop3"
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             ],
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_name": "ceph_lv0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_size": "21470642176",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "name": "ceph_lv0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "tags": {
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.cluster_name": "ceph",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.crush_device_class": "",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.encrypted": "0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.objectstore": "bluestore",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.osd_id": "0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.type": "block",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.vdo": "0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.with_tpm": "0"
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             },
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "type": "block",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "vg_name": "ceph_vg0"
Jan 21 14:16:56 compute-0 sharp_tu[250607]:         }
Jan 21 14:16:56 compute-0 sharp_tu[250607]:     ],
Jan 21 14:16:56 compute-0 sharp_tu[250607]:     "1": [
Jan 21 14:16:56 compute-0 sharp_tu[250607]:         {
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "devices": [
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "/dev/loop4"
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             ],
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_name": "ceph_lv1",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_size": "21470642176",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "name": "ceph_lv1",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "tags": {
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.cluster_name": "ceph",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.crush_device_class": "",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.encrypted": "0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.objectstore": "bluestore",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.osd_id": "1",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.type": "block",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.vdo": "0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.with_tpm": "0"
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             },
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "type": "block",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "vg_name": "ceph_vg1"
Jan 21 14:16:56 compute-0 sharp_tu[250607]:         }
Jan 21 14:16:56 compute-0 sharp_tu[250607]:     ],
Jan 21 14:16:56 compute-0 sharp_tu[250607]:     "2": [
Jan 21 14:16:56 compute-0 sharp_tu[250607]:         {
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "devices": [
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "/dev/loop5"
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             ],
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_name": "ceph_lv2",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_size": "21470642176",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "name": "ceph_lv2",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "tags": {
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.cluster_name": "ceph",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.crush_device_class": "",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.encrypted": "0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.objectstore": "bluestore",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.osd_id": "2",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.type": "block",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.vdo": "0",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:                 "ceph.with_tpm": "0"
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             },
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "type": "block",
Jan 21 14:16:56 compute-0 sharp_tu[250607]:             "vg_name": "ceph_vg2"
Jan 21 14:16:56 compute-0 sharp_tu[250607]:         }
Jan 21 14:16:56 compute-0 sharp_tu[250607]:     ]
Jan 21 14:16:56 compute-0 sharp_tu[250607]: }
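The sharp_tu block above is the full `ceph-volume lvm list --format json` payload requested by the cephadm call logged at 14:16:55: a map from OSD id to the logical volume backing it, with the authoritative metadata (cluster fsid, osd_fsid, objectstore, encryption) carried as LVM tags. A minimal sketch of reducing that payload to a per-OSD summary, assuming it has been saved to a file named lvm_list.json (a hypothetical name):

    #!/usr/bin/env python3
    # Hedged sketch: summarize `ceph-volume lvm list --format json` output.
    # Key names (lv_path, devices, tags -> ceph.osd_fsid) match the payload
    # logged above; the input file name is an assumption.
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")

Against the payload above this prints one line per OSD, e.g. osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3.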
Jan 21 14:16:56 compute-0 systemd[1]: libpod-6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a.scope: Deactivated successfully.
Jan 21 14:16:56 compute-0 podman[250641]: 2026-01-21 14:16:56.673921495 +0000 UTC m=+0.030524786 container died 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1a026a656b567710b34e1f34bf1f77e4c5f9db291feec74cb03fb9ad3cefa04-merged.mount: Deactivated successfully.
Jan 21 14:16:56 compute-0 podman[250641]: 2026-01-21 14:16:56.714693508 +0000 UTC m=+0.071296759 container remove 6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:16:56 compute-0 systemd[1]: libpod-conmon-6b95ed5d1443f48f9833a91fe6acb3f14f67250b9ab9d2180a90d022463ba43a.scope: Deactivated successfully.
Jan 21 14:16:56 compute-0 sudo[250515]: pam_unix(sudo:session): session closed for user root
Jan 21 14:16:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:16:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.snap/fb763622-636c-421d-a618-54f14cb70a37/c83a9347-47a8-43c7-ae36-697341704e14' to b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/87b82afe-d56b-4746-a27d-d86b000ae695'
Jan 21 14:16:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:16:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:16:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:56 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.tnwklj(active, since 32m)
Jan 21 14:16:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:56 compute-0 sudo[250656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:16:56 compute-0 sudo[250656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:16:56 compute-0 sudo[250656]: pam_unix(sudo:session): session closed for user root
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp' to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] untracking ce770c3c-0f1b-4ca4-a3e0-05207fc9c27f
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp' to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta.tmp' to config b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50/.meta'
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, af536fd1-8269-495b-9b12-b007bdeeab50)
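Between the queueing at 14:16:56 and this "finished clone" line, the async cloner copied the snapshot contents into the new subvolume and untracked the clone index entry; a caller cannot observe that directly and instead polls the `fs clone status` command that was dispatched earlier. A minimal polling sketch, assuming the status JSON carries the state under status.state as in upstream mgr/volumes:

    #!/usr/bin/env python3
    # Hedged sketch: poll "fs clone status" (dispatched above by
    # client.openstack) until the clone leaves pending/in-progress.
    import json
    import subprocess
    import time

    VOL, CLONE = "cephfs", "af536fd1-8269-495b-9b12-b007bdeeab50"

    while True:
        out = subprocess.run(
            ["ceph", "fs", "clone", "status", VOL, CLONE,
             "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)["status"]["state"]
        print("clone state:", state)
        if state not in ("pending", "in-progress"):
            break
        time.sleep(1)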
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "Joe", "tenant_id": "6b53653c238d45b18082508e065d099c", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 14:16:56 compute-0 sudo[250681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:16:56 compute-0 sudo[250681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:16:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Jan 21 14:16:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 14:16:56 compute-0 ceph-mgr[75322]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Jan 21 14:16:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:16:56.923+0000 7fc516655640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
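The authorize for alice_bob above succeeded (the mon granted read-only mds/osd/mon caps), but the follow-up request for auth_id Joe fails: the volumes module finds client.Joe already in use and replies EPERM rather than rebinding the ID, its guard against reusing an auth ID created outside the module or bound to a different tenant; that interpretation is an inference from the ERROR line above. A minimal sketch of probing for the collision before calling authorize:

    #!/usr/bin/env python3
    # Hedged sketch: check whether a client entity already exists before
    # issuing "fs subvolume authorize", mirroring the collision the mgr
    # reports above. The auth_id is taken from this log.
    import subprocess

    AUTH_ID = "Joe"

    probe = subprocess.run(
        ["ceph", "auth", "get", f"client.{AUTH_ID}", "--format", "json"],
        capture_output=True, text=True,
    )
    if probe.returncode == 0:
        print(f"client.{AUTH_ID} already exists; choose a different "
              "auth_id or deauthorize it from its current subvolume first")
    else:
        print(f"client.{AUTH_ID} is free to use")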
Jan 21 14:16:57 compute-0 podman[250719]: 2026-01-21 14:16:57.201010676 +0000 UTC m=+0.052595420 container create 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:16:57 compute-0 systemd[1]: Started libpod-conmon-13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2.scope.
Jan 21 14:16:57 compute-0 podman[250719]: 2026-01-21 14:16:57.174429285 +0000 UTC m=+0.026014109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:16:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:16:57 compute-0 podman[250719]: 2026-01-21 14:16:57.290471833 +0000 UTC m=+0.142056657 container init 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 21 14:16:57 compute-0 podman[250719]: 2026-01-21 14:16:57.297601445 +0000 UTC m=+0.149186189 container start 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:16:57 compute-0 podman[250719]: 2026-01-21 14:16:57.3019669 +0000 UTC m=+0.153551644 container attach 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 14:16:57 compute-0 practical_chatterjee[250736]: 167 167
Jan 21 14:16:57 compute-0 systemd[1]: libpod-13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2.scope: Deactivated successfully.
Jan 21 14:16:57 compute-0 podman[250719]: 2026-01-21 14:16:57.303342823 +0000 UTC m=+0.154927567 container died 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:16:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-118869ce6c07d6b654e32d2cadce44e2daefa6b3996e1ec88d36b9901d1c62f9-merged.mount: Deactivated successfully.
Jan 21 14:16:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Jan 21 14:16:57 compute-0 ceph-mgr[75322]: [progress WARNING root] complete: ev mgr-vol-ongoing-clones does not exist
Jan 21 14:16:57 compute-0 ceph-mgr[75322]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Jan 21 14:16:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Jan 21 14:16:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7fc5286a2bb0>
Jan 21 14:16:57 compute-0 podman[250719]: 2026-01-21 14:16:57.347945129 +0000 UTC m=+0.199529873 container remove 13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chatterjee, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:16:57 compute-0 systemd[1]: libpod-conmon-13e4949f2f2e84eb230d4cfe805d6f3e15fda252a003067f28de2d14920e96c2.scope: Deactivated successfully.
Jan 21 14:16:57 compute-0 podman[250758]: 2026-01-21 14:16:57.568748883 +0000 UTC m=+0.052488936 container create 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 14:16:57 compute-0 systemd[1]: Started libpod-conmon-8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a.scope.
Jan 21 14:16:57 compute-0 podman[250758]: 2026-01-21 14:16:57.546685352 +0000 UTC m=+0.030425425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:16:57 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441d9976769e0147d7aabb94c45f3ce8e74c511a0b3ec3766c1c50f83dd7ee3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441d9976769e0147d7aabb94c45f3ce8e74c511a0b3ec3766c1c50f83dd7ee3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441d9976769e0147d7aabb94c45f3ce8e74c511a0b3ec3766c1c50f83dd7ee3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441d9976769e0147d7aabb94c45f3ce8e74c511a0b3ec3766c1c50f83dd7ee3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:16:57 compute-0 podman[250758]: 2026-01-21 14:16:57.678134501 +0000 UTC m=+0.161874575 container init 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:16:57 compute-0 podman[250758]: 2026-01-21 14:16:57.68678666 +0000 UTC m=+0.170526713 container start 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:16:57 compute-0 podman[250758]: 2026-01-21 14:16:57.690186182 +0000 UTC m=+0.173926235 container attach 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:16:57 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37", "target_sub_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 14:16:57 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 14:16:57 compute-0 ceph-mon[75031]: pgmap v1104: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 67 KiB/s wr, 8 op/s
Jan 21 14:16:57 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:16:57 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:16:57 compute-0 ceph-mon[75031]: mgrmap e16: compute-0.tnwklj(active, since 32m)
Jan 21 14:16:57 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:16:57 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "Joe", "tenant_id": "6b53653c238d45b18082508e065d099c", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:16:57 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 14:16:58 compute-0 lvm[250854]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:16:58 compute-0 lvm[250853]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:16:58 compute-0 lvm[250853]: VG ceph_vg0 finished
Jan 21 14:16:58 compute-0 lvm[250854]: VG ceph_vg1 finished
Jan 21 14:16:58 compute-0 lvm[250856]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:16:58 compute-0 lvm[250856]: VG ceph_vg2 finished
Jan 21 14:16:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 67 KiB/s wr, 7 op/s
Jan 21 14:16:58 compute-0 hopeful_ride[250774]: {}
Jan 21 14:16:58 compute-0 systemd[1]: libpod-8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a.scope: Deactivated successfully.
Jan 21 14:16:58 compute-0 systemd[1]: libpod-8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a.scope: Consumed 1.372s CPU time.
Jan 21 14:16:58 compute-0 podman[250758]: 2026-01-21 14:16:58.532214057 +0000 UTC m=+1.015954110 container died 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 14:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0441d9976769e0147d7aabb94c45f3ce8e74c511a0b3ec3766c1c50f83dd7ee3-merged.mount: Deactivated successfully.
Jan 21 14:16:58 compute-0 podman[250758]: 2026-01-21 14:16:58.598991426 +0000 UTC m=+1.082731489 container remove 8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_ride, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:16:58 compute-0 systemd[1]: libpod-conmon-8ae2a1134334bdc59124fe20d9f7664d23c00842c1edf3951d6efc599e15b72a.scope: Deactivated successfully.
Jan 21 14:16:58 compute-0 sudo[250681]: pam_unix(sudo:session): session closed for user root
Jan 21 14:16:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:16:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:16:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:16:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:16:58 compute-0 sudo[250871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:16:58 compute-0 sudo[250871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:16:58 compute-0 sudo[250871]: pam_unix(sudo:session): session closed for user root
Jan 21 14:16:59 compute-0 ceph-mon[75031]: pgmap v1105: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 67 KiB/s wr, 7 op/s
Jan 21 14:16:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:16:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:17:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 108 KiB/s wr, 13 op/s
Jan 21 14:17:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "tempest-cephx-id-102251759", "tenant_id": "6b53653c238d45b18082508e065d099c", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume authorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 14:17:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} v 0)
Jan 21 14:17:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} : dispatch
Jan 21 14:17:00 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID tempest-cephx-id-102251759 with tenant 6b53653c238d45b18082508e065d099c
Jan 21 14:17:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-102251759", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:17:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-102251759", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-102251759", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:00 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} : dispatch
Jan 21 14:17:00 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-102251759", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:00 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-102251759", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume authorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 14:17:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:01 compute-0 ceph-mon[75031]: pgmap v1106: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 108 KiB/s wr, 13 op/s
Jan 21 14:17:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "tempest-cephx-id-102251759", "tenant_id": "6b53653c238d45b18082508e065d099c", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 71 KiB/s wr, 9 op/s
Jan 21 14:17:03 compute-0 ceph-mon[75031]: pgmap v1107: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 71 KiB/s wr, 9 op/s
Jan 21 14:17:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 14:17:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 10 op/s
Jan 21 14:17:04 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 14:17:05 compute-0 ceph-mon[75031]: pgmap v1108: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 10 op/s
Jan 21 14:17:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 55 KiB/s wr, 9 op/s
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
Jan 21 14:17:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:17:06 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:17:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:17:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 14:17:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:17:06 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:17:06 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume '7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3'
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239
Jan 21 14:17:07 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239],prefix=session evict} (starting...)
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "tempest-cephx-id-102251759", "format": "json"}]: dispatch
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume deauthorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:17:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} v 0)
Jan 21 14:17:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} : dispatch
Jan 21 14:17:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-102251759"} v 0)
Jan 21 14:17:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-102251759"} : dispatch
Jan 21 14:17:07 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-102251759"}]': finished
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume deauthorize, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "tempest-cephx-id-102251759", "format": "json"}]: dispatch
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume evict, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-102251759, client_metadata.root=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239
Jan 21 14:17:07 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=tempest-cephx-id-102251759,client_metadata.root=/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3/4677f60a-4f75-4468-9335-de3d6560e239],prefix=session evict} (starting...)
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-102251759, format:json, prefix:fs subvolume evict, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:17:07 compute-0 ceph-mon[75031]: pgmap v1109: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 55 KiB/s wr, 9 op/s
Jan 21 14:17:07 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 14:17:07 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:07 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:17:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:17:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:17:07 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-102251759", "format": "json"} : dispatch
Jan 21 14:17:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-102251759"} : dispatch
Jan 21 14:17:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-102251759"}]': finished
Jan 21 14:17:08 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 14:17:08 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5409ffd4-eaac-442c-8587-e47fdf7d7341/d87b1044-560b-4671-b984-5a9e764bf8bb'.
Jan 21 14:17:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5409ffd4-eaac-442c-8587-e47fdf7d7341/.meta.tmp'
Jan 21 14:17:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5409ffd4-eaac-442c-8587-e47fdf7d7341/.meta.tmp' to config b'/volumes/_nogroup/5409ffd4-eaac-442c-8587-e47fdf7d7341/.meta'
Jan 21 14:17:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 14:17:08 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "format": "json"}]: dispatch
Jan 21 14:17:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 14:17:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 55 KiB/s wr, 8 op/s
Jan 21 14:17:08 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 14:17:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:17:08 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 14:17:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 14:17:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "tempest-cephx-id-102251759", "format": "json"}]: dispatch
Jan 21 14:17:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "auth_id": "tempest-cephx-id-102251759", "format": "json"}]: dispatch
Jan 21 14:17:08 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:09 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 14:17:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af536fd1-8269-495b-9b12-b007bdeeab50, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:09 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
Jan 21 14:17:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/af536fd1-8269-495b-9b12-b007bdeeab50'' moved to trashcan
Jan 21 14:17:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af536fd1-8269-495b-9b12-b007bdeeab50, vol_name:cephfs) < ""
Jan 21 14:17:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "format": "json"}]: dispatch
Jan 21 14:17:09 compute-0 ceph-mon[75031]: pgmap v1110: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 55 KiB/s wr, 8 op/s
Jan 21 14:17:10 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:17:10 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:17:10 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:17:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 109 KiB/s wr, 13 op/s
Jan 21 14:17:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:17:10 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:10 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:10 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "format": "json"}]: dispatch
Jan 21 14:17:10 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af536fd1-8269-495b-9b12-b007bdeeab50", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:17:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:17:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:11 compute-0 podman[250900]: 2026-01-21 14:17:11.383819322 +0000 UTC m=+0.100611517 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 21 14:17:11 compute-0 podman[250899]: 2026-01-21 14:17:11.391642681 +0000 UTC m=+0.108637491 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 14:17:11 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:11 compute-0 ceph-mon[75031]: pgmap v1111: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 109 KiB/s wr, 13 op/s
Jan 21 14:17:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Jan 21 14:17:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 14:17:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0)
Jan 21 14:17:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Jan 21 14:17:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118
Jan 21 14:17:11 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2/e6ee0d7e-e80e-4cfd-9421-b1c84c73d118],prefix=session evict} (starting...)
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 14:17:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 68 KiB/s wr, 6 op/s
Jan 21 14:17:12 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "format": "json"}]: dispatch
Jan 21 14:17:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:12 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:12.598+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5409ffd4-eaac-442c-8587-e47fdf7d7341' of type subvolume
Jan 21 14:17:12 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5409ffd4-eaac-442c-8587-e47fdf7d7341' of type subvolume
Jan 21 14:17:12 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 14:17:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5409ffd4-eaac-442c-8587-e47fdf7d7341'' moved to trashcan
Jan 21 14:17:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5409ffd4-eaac-442c-8587-e47fdf7d7341, vol_name:cephfs) < ""
Jan 21 14:17:12 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 14:17:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Jan 21 14:17:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Jan 21 14:17:12 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Jan 21 14:17:12 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "auth_id": "Joe", "format": "json"}]: dispatch
Jan 21 14:17:13 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37_42f3b54a-8269-4b88-8ab9-77648d8a58e3", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb763622-636c-421d-a618-54f14cb70a37_42f3b54a-8269-4b88-8ab9-77648d8a58e3, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:17:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp'
Jan 21 14:17:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp' to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta'
Jan 21 14:17:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb763622-636c-421d-a618-54f14cb70a37_42f3b54a-8269-4b88-8ab9-77648d8a58e3, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:17:13 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:17:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp'
Jan 21 14:17:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta.tmp' to config b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31/.meta'
Jan 21 14:17:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fb763622-636c-421d-a618-54f14cb70a37, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:17:13 compute-0 ceph-mon[75031]: pgmap v1112: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 68 KiB/s wr, 6 op/s
Jan 21 14:17:13 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "format": "json"}]: dispatch
Jan 21 14:17:13 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5409ffd4-eaac-442c-8587-e47fdf7d7341", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:14 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:17:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:14 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 21 14:17:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:17:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:17:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 14:17:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:17:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:17:14 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 14:17:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:14 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:17:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:17:14 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:17:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 125 KiB/s wr, 12 op/s
Jan 21 14:17:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37_42f3b54a-8269-4b88-8ab9-77648d8a58e3", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "snap_name": "fb763622-636c-421d-a618-54f14cb70a37", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:17:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:17:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:17:15 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "admin", "tenant_id": "183d8c03d481485397037ffe17a60995", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:15 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 14:17:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0)
Jan 21 14:17:15 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Jan 21 14:17:15 compute-0 ceph-mgr[75322]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Jan 21 14:17:15 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 14:17:15 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:15.072+0000 7fc516655640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Jan 21 14:17:15 compute-0 ceph-mgr[75322]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Jan 21 14:17:15 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:17:15 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:17:15 compute-0 ceph-mon[75031]: pgmap v1113: 305 pgs: 305 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 125 KiB/s wr, 12 op/s
Jan 21 14:17:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f149fb68-d34d-441e-9d9e-10acfdb751c3/be395b17-77ea-4d1f-a3d5-cf12644172e9'.
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f149fb68-d34d-441e-9d9e-10acfdb751c3/.meta.tmp'
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f149fb68-d34d-441e-9d9e-10acfdb751c3/.meta.tmp' to config b'/volumes/_nogroup/f149fb68-d34d-441e-9d9e-10acfdb751c3/.meta'
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "format": "json"}]: dispatch
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 14:17:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:17:16 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 114 KiB/s wr, 13 op/s
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "format": "json"}]: dispatch
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:16 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:16.540+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd96d5fd8-0350-40e0-a742-9103d3d18e31' of type subvolume
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd96d5fd8-0350-40e0-a742-9103d3d18e31' of type subvolume
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d96d5fd8-0350-40e0-a742-9103d3d18e31'' moved to trashcan
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d96d5fd8-0350-40e0-a742-9103d3d18e31, vol_name:cephfs) < ""
Jan 21 14:17:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 21 14:17:17 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "admin", "tenant_id": "183d8c03d481485397037ffe17a60995", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:17 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 21 14:17:17 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 21 14:17:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:17:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:17:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:17:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:17:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:17:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "format": "json"}]: dispatch
Jan 21 14:17:18 compute-0 ceph-mon[75031]: pgmap v1114: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 114 KiB/s wr, 13 op/s
Jan 21 14:17:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "format": "json"}]: dispatch
Jan 21 14:17:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d96d5fd8-0350-40e0-a742-9103d3d18e31", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:18 compute-0 ceph-mon[75031]: osdmap e146: 3 total, 3 up, 3 in
Jan 21 14:17:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:17:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:17:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 136 KiB/s wr, 14 op/s
Jan 21 14:17:18 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "david", "tenant_id": "183d8c03d481485397037ffe17a60995", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 14:17:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Jan 21 14:17:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 14:17:18 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID david with tenant 183d8c03d481485397037ffe17a60995
Jan 21 14:17:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d4d9a3e7-c006-4c96-ab86-0ee694f36366", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:17:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d4d9a3e7-c006-4c96-ab86-0ee694f36366", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d4d9a3e7-c006-4c96-ab86-0ee694f36366", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, tenant_id:183d8c03d481485397037ffe17a60995, vol_name:cephfs) < ""
Jan 21 14:17:19 compute-0 ceph-mon[75031]: pgmap v1116: 305 pgs: 305 active+clean; 57 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 136 KiB/s wr, 14 op/s
Jan 21 14:17:19 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "david", "tenant_id": "183d8c03d481485397037ffe17a60995", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 14:17:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d4d9a3e7-c006-4c96-ab86-0ee694f36366", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:19 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d4d9a3e7-c006-4c96-ab86-0ee694f36366", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "format": "json"}]: dispatch
Jan 21 14:17:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:19 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:19.926+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f149fb68-d34d-441e-9d9e-10acfdb751c3' of type subvolume
Jan 21 14:17:19 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f149fb68-d34d-441e-9d9e-10acfdb751c3' of type subvolume
Jan 21 14:17:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 14:17:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f149fb68-d34d-441e-9d9e-10acfdb751c3'' moved to trashcan
Jan 21 14:17:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f149fb68-d34d-441e-9d9e-10acfdb751c3, vol_name:cephfs) < ""
Jan 21 14:17:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 147 KiB/s wr, 17 op/s
Jan 21 14:17:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "format": "json"}]: dispatch
Jan 21 14:17:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f149fb68-d34d-441e-9d9e-10acfdb751c3", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 21 14:17:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:17:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 21 14:17:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 21 14:17:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:17:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:17:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 14:17:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:17:21 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:17:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:17:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:17:21 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:17:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:22 compute-0 ceph-mon[75031]: pgmap v1117: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 147 KiB/s wr, 17 op/s
Jan 21 14:17:22 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:17:22 compute-0 ceph-mon[75031]: osdmap e147: 3 total, 3 up, 3 in
Jan 21 14:17:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:17:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:17:22 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:17:22 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:17:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 98 KiB/s wr, 13 op/s
Jan 21 14:17:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:17:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2877047264' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:17:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:17:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2877047264' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:17:23 compute-0 ceph-mon[75031]: pgmap v1119: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 98 KiB/s wr, 13 op/s
Jan 21 14:17:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2877047264' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:17:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2877047264' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:17:23 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 14:17:23 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/f8485c14-2515-499e-a291-140bfb971fb6'.
Jan 21 14:17:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/.meta.tmp'
Jan 21 14:17:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/.meta.tmp' to config b'/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/.meta'
Jan 21 14:17:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 14:17:23 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "format": "json"}]: dispatch
Jan 21 14:17:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 14:17:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 14:17:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:17:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:24 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:17:24.031 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:17:24 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:17:24.033 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:17:24 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:24 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "format": "json"}]: dispatch
Jan 21 14:17:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 141 KiB/s wr, 13 op/s
Jan 21 14:17:24 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:17:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:17:24 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:17:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:17:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:25 compute-0 ceph-mon[75031]: pgmap v1120: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 141 KiB/s wr, 13 op/s
Jan 21 14:17:25 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:17:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:26 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "auth_id": "david", "tenant_id": "6b53653c238d45b18082508e065d099c", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 14:17:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Jan 21 14:17:26 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 14:17:26 compute-0 ceph-mgr[75322]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Jan 21 14:17:26 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, tenant_id:6b53653c238d45b18082508e065d099c, vol_name:cephfs) < ""
Jan 21 14:17:26 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:26.093+0000 7fc516655640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Jan 21 14:17:26 compute-0 ceph-mgr[75322]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Jan 21 14:17:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 797 B/s rd, 126 KiB/s wr, 14 op/s
Jan 21 14:17:26 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 14:17:28 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "auth_id": "david", "tenant_id": "6b53653c238d45b18082508e065d099c", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:28 compute-0 ceph-mon[75031]: pgmap v1121: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 797 B/s rd, 126 KiB/s wr, 14 op/s
Jan 21 14:17:28 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:17:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 113 KiB/s wr, 12 op/s
Jan 21 14:17:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:17:28 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:17:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 14:17:28 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:17:28 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:17:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:28 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:17:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:17:28 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:17:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:29 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:17:29.034 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:17:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:17:29 compute-0 ceph-mon[75031]: pgmap v1122: 305 pgs: 305 active+clean; 58 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 113 KiB/s wr, 12 op/s
Jan 21 14:17:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:17:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:17:29 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:17:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:17:29 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 14:17:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 14:17:30 compute-0 ceph-mgr[75322]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '61ae05c8-89f3-407b-bbf2-1e843fc0b15c'
Jan 21 14:17:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 14:17:30 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 14:17:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 14:17:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/f8485c14-2515-499e-a291-140bfb971fb6
Jan 21 14:17:30 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c/f8485c14-2515-499e-a291-140bfb971fb6],prefix=session evict} (starting...)
Jan 21 14:17:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
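
The lines above trace the share-teardown pair for auth ID 'david': fs subvolume deauthorize (which also drops the client.david cephx key, per the auth rm audit entries) followed by fs subvolume evict, which asks the MDS to kick sessions matching auth_name plus the subvolume's client_metadata.root. The WARNING shows deauthorize tolerates an already-removed auth ID. A sketch of driving the same two mgr commands from Python with the librados binding, reusing the exact JSON payloads from the audit log (the conffile path is an assumption; the mon forwards "fs subvolume ..." commands to the active mgr, as the dispatch lines show):

    # Sketch: replay the deauthorize + evict pair via librados, with
    # payloads copied from the logged commands.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',   # assumed path
                          name='client.openstack')
    cluster.connect()

    def mgr_cmd(**payload):
        payload['format'] = 'json'
        ret, out, errs = cluster.mon_command(json.dumps(payload), b'')
        if ret != 0:
            raise RuntimeError(f'{payload["prefix"]} failed: {ret} {errs}')
        return out

    sub = '61ae05c8-89f3-407b-bbf2-1e843fc0b15c'
    mgr_cmd(prefix='fs subvolume deauthorize', vol_name='cephfs',
            sub_name=sub, auth_id='david')
    mgr_cmd(prefix='fs subvolume evict', vol_name='cephfs',
            sub_name=sub, auth_id='david')
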
Jan 21 14:17:30 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 14:17:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 83 KiB/s wr, 8 op/s
Jan 21 14:17:31 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 14:17:31 compute-0 ceph-mon[75031]: pgmap v1123: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 83 KiB/s wr, 8 op/s
Jan 21 14:17:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.412465) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051412494, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1367, "num_deletes": 261, "total_data_size": 1649275, "memory_usage": 1685616, "flush_reason": "Manual Compaction"}
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051434094, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1619499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23467, "largest_seqno": 24833, "table_properties": {"data_size": 1612921, "index_size": 3525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15843, "raw_average_key_size": 20, "raw_value_size": 1598823, "raw_average_value_size": 2065, "num_data_blocks": 157, "num_entries": 774, "num_filter_entries": 774, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769004975, "oldest_key_time": 1769004975, "file_creation_time": 1769005051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 21693 microseconds, and 6397 cpu microseconds.
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.434153) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1619499 bytes OK
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.434176) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.436626) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.436647) EVENT_LOG_v1 {"time_micros": 1769005051436641, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.436668) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1642565, prev total WAL file size 1642565, number of live WAL files 2.
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.437423) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373537' seq:0, type:0; will stop at (end)
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1581KB)], [53(9166KB)]
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051437458, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11005669, "oldest_snapshot_seqno": -1}
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5336 keys, 10902977 bytes, temperature: kUnknown
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051514313, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10902977, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10862957, "index_size": 25574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 132410, "raw_average_key_size": 24, "raw_value_size": 10762830, "raw_average_value_size": 2017, "num_data_blocks": 1071, "num_entries": 5336, "num_filter_entries": 5336, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769005051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.514687) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10902977 bytes
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.516810) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.0 rd, 141.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(13.5) write-amplify(6.7) OK, records in: 5880, records dropped: 544 output_compression: NoCompression
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.516842) EVENT_LOG_v1 {"time_micros": 1769005051516827, "job": 28, "event": "compaction_finished", "compaction_time_micros": 76960, "compaction_time_cpu_micros": 26565, "output_level": 6, "num_output_files": 1, "total_output_size": 10902977, "num_input_records": 5880, "num_output_records": 5336, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051517537, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005051521221, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.437355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.521297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.521304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.521308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.521311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:17:31 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:17:31.521314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
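
JOB 28's summary reports read-write-amplify(13.5) and write-amplify(6.7), and both follow from the byte counts logged for the job: one 1,619,499-byte L0 input (table 55, the freshly flushed memtable), one 9166 KB L6 input (table 53), and a single 10,902,977-byte output (table 56). A quick check of that arithmetic, measuring amplification against the newly flushed L0 bytes:

    # Verify the amplification figures in JOB 28's compaction summary.
    l0_in = 1_619_499            # table 55, the flushed memtable
    l6_in = 9_166 * 1024         # table 53, reported as 9166KB
    out   = 10_902_977           # table 56, the compacted output

    write_amp = out / l0_in                     # bytes written per new byte
    rw_amp    = (l0_in + l6_in + out) / l0_in   # bytes moved per new byte
    print(f'write-amplify      {write_amp:.1f}')   # ~6.7
    print(f'read-write-amplify {rw_amp:.1f}')      # ~13.5
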
Jan 21 14:17:32 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:17:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:17:32 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:17:32 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:17:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 277 B/s rd, 75 KiB/s wr, 8 op/s
Jan 21 14:17:32 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:17:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:17:32 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:32 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
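
The authorize round trip for alice ends in the auth get-or-create above, whose caps encode the subvolume's isolation: an MDS cap scoped to the subvolume path, an OSD cap scoped to the fsvolumens___nogroup_... RADOS namespace inside cephfs.cephfs.data, and read-only mon access. An illustrative reconstruction of that cap template (the helper is not a Ceph API; every value below is verbatim from the logged command):

    # Illustrative only: the cephx cap template the volumes module
    # filled in for client.alice above.
    def subvolume_caps(level, mds_path, pool, namespace):
        return ['mds', f'allow {level} path={mds_path}',
                'osd', f'allow {level} pool={pool} namespace={namespace}',
                'mon', 'allow r']

    caps = subvolume_caps(
        'r',
        '/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/'
        '04464bce-b5c2-48d9-860a-5b8b6ce45575',
        'cephfs.cephfs.data',
        'fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b')

The alice_bob grant later in the log is the same template with access_level rw.
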
Jan 21 14:17:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 14:17:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:17:33 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:17:33 compute-0 ceph-mon[75031]: pgmap v1124: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 277 B/s rd, 75 KiB/s wr, 8 op/s
Jan 21 14:17:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:33 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:17:33.908 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:17:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:17:33.909 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:17:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:17:33.909 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:17:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Jan 21 14:17:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 14:17:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0)
Jan 21 14:17:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Jan 21 14:17:33 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Jan 21 14:17:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:17:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 14:17:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:17:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37
Jan 21 14:17:33 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366/5021dd0b-410c-4556-8ea7-3591d44d4e37],prefix=session evict} (starting...)
Jan 21 14:17:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:17:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 14:17:34 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/b078df4b-38f9-4410-ab89-a5c09da3b1cb'.
Jan 21 14:17:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp'
Jan 21 14:17:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp' to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta'
Jan 21 14:17:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 14:17:34 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "format": "json"}]: dispatch
Jan 21 14:17:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 14:17:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
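
The create/getpath pair above is the provisioning step: a 1 GiB, namespace-isolated subvolume with mode 0755, then a path lookup for the export location. The equivalent ceph CLI calls, wrapped in subprocess for consistency with the other sketches (flag spellings per the documented fs subvolume interface; the binary and a usable keyring on this host are assumptions):

    # Sketch: provision a subvolume like the logged create/getpath pair.
    import subprocess

    sub = '6fb09730-0544-4361-97f8-11e56000d2f0'
    subprocess.run(['ceph', 'fs', 'subvolume', 'create', 'cephfs', sub,
                    '--size', '1073741824', '--namespace-isolated',
                    '--mode', '0755'], check=True)
    path = subprocess.run(
        ['ceph', 'fs', 'subvolume', 'getpath', 'cephfs', sub],
        check=True, capture_output=True, text=True).stdout.strip()
    # path -> /volumes/_nogroup/<sub>/<uuid>, matching the earmark
    # and metadata_manager paths logged above
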
Jan 21 14:17:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:17:34 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 112 KiB/s wr, 10 op/s
Jan 21 14:17:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 14:17:34 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Jan 21 14:17:34 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Jan 21 14:17:34 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Jan 21 14:17:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "david", "format": "json"}]: dispatch
Jan 21 14:17:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:34 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:35 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "format": "json"}]: dispatch
Jan 21 14:17:35 compute-0 ceph-mon[75031]: pgmap v1125: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 112 KiB/s wr, 10 op/s
Jan 21 14:17:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 83 KiB/s wr, 9 op/s
Jan 21 14:17:38 compute-0 ceph-mon[75031]: pgmap v1126: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 83 KiB/s wr, 9 op/s
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "snap_name": "ed4d40c5-f4bd-45ce-9692-e4bc79bb0372", "format": "json"}]: dispatch
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 82 KiB/s wr, 7 op/s
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:17:38 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:17:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 14:17:38 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:17:38 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:17:38 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "format": "json"}]: dispatch
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:38 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:38.934+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '61ae05c8-89f3-407b-bbf2-1e843fc0b15c' of type subvolume
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '61ae05c8-89f3-407b-bbf2-1e843fc0b15c' of type subvolume
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/61ae05c8-89f3-407b-bbf2-1e843fc0b15c'' moved to trashcan
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:61ae05c8-89f3-407b-bbf2-1e843fc0b15c, vol_name:cephfs) < ""
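
The clone status probe fails with (95) Operation not supported because 61ae05c8... is a plain subvolume, not a clone; the caller treats that as "nothing to wait for" and proceeds straight to fs subvolume rm with force, after which the mgr moves the path to the trashcan and queues an async purge job. A sketch of that tolerate-then-delete pattern, with the error-string match taken from the reply logged above:

    # Sketch: the delete path implied by the log -- probe clone status,
    # treat EOPNOTSUPP ("not allowed on subvolume ... of type subvolume")
    # as "not a clone", then remove with --force.
    import subprocess

    def rm_subvolume(vol, sub):
        st = subprocess.run(['ceph', 'fs', 'clone', 'status', vol, sub],
                            capture_output=True, text=True)
        if st.returncode != 0 and 'not allowed on subvolume' not in st.stderr:
            raise RuntimeError(st.stderr)   # a real failure, not EOPNOTSUPP
        # (a real driver would wait for an in-progress clone to complete)
        subprocess.run(['ceph', 'fs', 'subvolume', 'rm', vol, sub, '--force'],
                       check=True)

    rm_subvolume('cephfs', '61ae05c8-89f3-407b-bbf2-1e843fc0b15c')
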
Jan 21 14:17:39 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "snap_name": "ed4d40c5-f4bd-45ce-9692-e4bc79bb0372", "format": "json"}]: dispatch
Jan 21 14:17:39 compute-0 ceph-mon[75031]: pgmap v1127: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 82 KiB/s wr, 7 op/s
Jan 21 14:17:39 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:17:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:17:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:17:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:17:39 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:17:39 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "format": "json"}]: dispatch
Jan 21 14:17:39 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "61ae05c8-89f3-407b-bbf2-1e843fc0b15c", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:17:39
Jan 21 14:17:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:17:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:17:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms', 'backups']
Jan 21 14:17:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
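
The balancer pass auto_2026-01-21_14:17:39 ran in upmap mode over all eleven pools and prepared 0/10 upmap changes, meaning PG placement is already even; the "max misplaced 0.050000" bound is the mgr's target_max_misplaced_ratio ceiling on how much data a plan may set in motion. A quick sketch for inspecting the same state (option name per the mgr config; verify on your release):

    # Sketch: inspect the balancer state behind the log lines above.
    import subprocess

    for cmd in (['ceph', 'balancer', 'status'],
                ['ceph', 'config', 'get', 'mgr', 'target_max_misplaced_ratio']):
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)
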
Jan 21 14:17:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 60 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 116 KiB/s wr, 11 op/s
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:17:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:17:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:41 compute-0 ceph-mon[75031]: pgmap v1128: 305 pgs: 305 active+clean; 60 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 116 KiB/s wr, 11 op/s
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:17:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:17:42 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:17:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:17:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:42 compute-0 podman[250951]: 2026-01-21 14:17:42.326498209 +0000 UTC m=+0.046873691 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 21 14:17:42 compute-0 podman[250950]: 2026-01-21 14:17:42.352043435 +0000 UTC m=+0.075128592 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
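
The two podman health_status events record scheduled healthcheck passes for ovn_metadata_agent and ovn_controller: per the config_data, each container mounts its healthcheck directory at /openstack and runs /openstack/healthcheck as the test, and health_status=healthy with health_failing_streak=0 means the last probe exited 0. The same probe can be fired on demand; exit status 0 indicates healthy:

    # Sketch: trigger the probe podman runs on its timer. Exit code 0
    # means the container's configured healthcheck passed.
    import subprocess

    for name in ('ovn_metadata_agent', 'ovn_controller'):
        r = subprocess.run(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if r.returncode == 0 else 'unhealthy')
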
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 60 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 76 KiB/s wr, 7 op/s
Jan 21 14:17:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:17:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "format": "json"}]: dispatch
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:42 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:42.509+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3' of type subvolume
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3' of type subvolume
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3'' moved to trashcan
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3, vol_name:cephfs) < ""
Jan 21 14:17:43 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:43 compute-0 ceph-mon[75031]: pgmap v1129: 305 pgs: 305 active+clean; 60 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 76 KiB/s wr, 7 op/s
Jan 21 14:17:43 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "format": "json"}]: dispatch
Jan 21 14:17:43 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a1d41ab-f2a7-4734-8f5d-029c6fa0d7e3", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:43 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 14:17:43 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/4934ef18-6cd5-442b-a8fa-227c3608b0bd/c71696ba-936e-4f18-89f5-ca8196c4bb94'.
Jan 21 14:17:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4934ef18-6cd5-442b-a8fa-227c3608b0bd/.meta.tmp'
Jan 21 14:17:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4934ef18-6cd5-442b-a8fa-227c3608b0bd/.meta.tmp' to config b'/volumes/_nogroup/4934ef18-6cd5-442b-a8fa-227c3608b0bd/.meta'
Jan 21 14:17:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 14:17:43 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "format": "json"}]: dispatch
Jan 21 14:17:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 14:17:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 14:17:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:17:43 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 116 KiB/s wr, 10 op/s
Jan 21 14:17:44 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:44 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "format": "json"}]: dispatch
Jan 21 14:17:44 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:44 compute-0 nova_compute[239261]: 2026-01-21 14:17:44.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/93827b21-dc3b-4f90-ab80-d532ba42cf82/d69cf44b-7a0d-437d-8b78-55611c70851f'.
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/93827b21-dc3b-4f90-ab80-d532ba42cf82/.meta.tmp'
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/93827b21-dc3b-4f90-ab80-d532ba42cf82/.meta.tmp' to config b'/volumes/_nogroup/93827b21-dc3b-4f90-ab80-d532ba42cf82/.meta'
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "format": "json"}]: dispatch
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 14:17:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:17:45 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:45 compute-0 ceph-mon[75031]: pgmap v1130: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 116 KiB/s wr, 10 op/s
Jan 21 14:17:45 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:45 compute-0 nova_compute[239261]: 2026-01-21 14:17:45.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
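
_poll_unconfirmed_resizes (at :44) and update_available_resource (at :45) are nova-compute periodic tasks dispatched by oslo.service's run_periodic_tasks, visible in the logged source path. A minimal sketch of that pattern, with an illustrative manager and spacing rather than nova's actual code:

    # Minimal sketch of the oslo.service periodic-task pattern behind
    # the "Running periodic task ..." lines; the class and task here
    # are illustrative, not nova's implementation.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)   # run every 60 s
        def update_available_resource(self, context):
            pass  # inventory refresh would go here

    mgr = Manager()
    mgr.run_periodic_tasks(None)  # one tick of the loop the log lines come from
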
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:17:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:17:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 14:17:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:17:45 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:17:45 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
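[editor's note] Revoking access is the mirror image, and the lines above show both halves: deauthorize makes the mgr fetch and delete the client key (the "auth get" / "auth rm" pair dispatched to the mon), and evict then drops any live MDS sessions whose auth_name and client_metadata.root match the subvolume, so a deleted key cannot keep an already-established mount alive. A hedged CLI equivalent, reusing the ceph() helper sketched earlier and the names from these audit entries:

    # Same ceph() helper as in the earlier sketch; argument order assumed to
    # follow the mgr command signature (vol_name, sub_name, auth_id).
    SUB = "424167b3-6c3d-4062-8da1-4d053af4cf7b"
    ceph("fs", "subvolume", "deauthorize", "cephfs", SUB, "alice_bob")
    ceph("fs", "subvolume", "evict", "cephfs", SUB, "alice_bob")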
Jan 21 14:17:45 compute-0 nova_compute[239261]: 2026-01-21 14:17:45.897 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:17:45 compute-0 nova_compute[239261]: 2026-01-21 14:17:45.897 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:17:45 compute-0 nova_compute[239261]: 2026-01-21 14:17:45.898 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:17:45 compute-0 nova_compute[239261]: 2026-01-21 14:17:45.898 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:17:45 compute-0 nova_compute[239261]: 2026-01-21 14:17:45.899 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:17:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 74 KiB/s wr, 9 op/s
Jan 21 14:17:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:17:46 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/219510721' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:17:46 compute-0 nova_compute[239261]: 2026-01-21 14:17:46.501 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
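[editor's note] The 0.6 s "ceph df" call is nova's resource tracker measuring RBD-backed disk capacity: it shells out to the CLI rather than using librados, and the JSON reply carries the same totals the pgmap lines report (60 GiB total / 60 GiB avail). A short sketch of consuming that output is below; the stats key names follow recent Ceph releases and should be verified against the deployed version.

    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    )
    stats = json.loads(raw)["stats"]  # top-level key per recent Ceph; verify locally
    GIB = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / GIB:.0f} GiB free "
          f"of {stats['total_bytes'] / GIB:.0f} GiB")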
Jan 21 14:17:46 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "format": "json"}]: dispatch
Jan 21 14:17:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:46 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9d63fab0-cc30-4952-b485-806c5f0f78c2' of type subvolume
Jan 21 14:17:46 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:46.535+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9d63fab0-cc30-4952-b485-806c5f0f78c2' of type subvolume
Jan 21 14:17:46 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
Jan 21 14:17:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9d63fab0-cc30-4952-b485-806c5f0f78c2'' moved to trashcan
Jan 21 14:17:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9d63fab0-cc30-4952-b485-806c5f0f78c2, vol_name:cephfs) < ""
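[editor's note] The (95) replies above are expected noise, not failures: "fs clone status" is only defined for subvolumes created as snapshot clones, so on a plain subvolume the mgr answers EOPNOTSUPP, the caller concludes there is no in-flight clone to wait for, and deletion proceeds. "fs subvolume rm --force" itself returns quickly because it only renames the directory into the trash and queues an asynchronous purge, which is exactly what the "moved to trashcan" and "queuing job" lines record. A sketch of tolerating the probe, assuming (as the ceph CLI normally does) that the errno is propagated as the exit status:

    import errno
    import subprocess

    def is_pending_clone(sub: str) -> bool:
        try:
            ceph("fs", "clone", "status", "cephfs", sub)  # helper from earlier sketch
            return True
        except subprocess.CalledProcessError as e:
            if e.returncode == errno.EOPNOTSUPP:  # 95: plain subvolume, not a clone
                return False
            raise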
Jan 21 14:17:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "format": "json"}]: dispatch
Jan 21 14:17:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:17:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:17:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:17:46 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:46 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/219510721' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:17:46 compute-0 nova_compute[239261]: 2026-01-21 14:17:46.652 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:17:46 compute-0 nova_compute[239261]: 2026-01-21 14:17:46.653 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5054MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:17:46 compute-0 nova_compute[239261]: 2026-01-21 14:17:46.653 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:17:46 compute-0 nova_compute[239261]: 2026-01-21 14:17:46.653 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:17:46 compute-0 nova_compute[239261]: 2026-01-21 14:17:46.725 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:17:46 compute-0 nova_compute[239261]: 2026-01-21 14:17:46.725 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:17:46 compute-0 nova_compute[239261]: 2026-01-21 14:17:46.742 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:17:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:17:47 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555477584' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:17:47 compute-0 nova_compute[239261]: 2026-01-21 14:17:47.263 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:17:47 compute-0 nova_compute[239261]: 2026-01-21 14:17:47.269 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:17:47 compute-0 nova_compute[239261]: 2026-01-21 14:17:47.406 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:17:47 compute-0 nova_compute[239261]: 2026-01-21 14:17:47.409 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:17:47 compute-0 nova_compute[239261]: 2026-01-21 14:17:47.410 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
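[editor's note] Interleaved with the Ceph traffic, nova's update_available_resource periodic task runs a full audit under the "compute_resources" semaphore: hypervisor view, final resource view, then a placement inventory comparison, holding the lock for 0.756 s in this cycle. The acquire/release pairs in these lines come from oslo.concurrency; the shape is roughly the following, where the decorator is real oslo.concurrency API but the body is only a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def _update_available_resource():
        # audit hypervisor resources, rebuild the resource view,
        # then reconcile inventory with the placement service
        pass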
Jan 21 14:17:47 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "format": "json"}]: dispatch
Jan 21 14:17:47 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:47 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:47 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4934ef18-6cd5-442b-a8fa-227c3608b0bd' of type subvolume
Jan 21 14:17:47 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:47.505+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4934ef18-6cd5-442b-a8fa-227c3608b0bd' of type subvolume
Jan 21 14:17:47 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:47 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 14:17:47 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4934ef18-6cd5-442b-a8fa-227c3608b0bd'' moved to trashcan
Jan 21 14:17:47 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:47 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4934ef18-6cd5-442b-a8fa-227c3608b0bd, vol_name:cephfs) < ""
Jan 21 14:17:47 compute-0 ceph-mon[75031]: pgmap v1131: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 74 KiB/s wr, 9 op/s
Jan 21 14:17:47 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "format": "json"}]: dispatch
Jan 21 14:17:47 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9d63fab0-cc30-4952-b485-806c5f0f78c2", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:47 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3555477584' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:17:48 compute-0 nova_compute[239261]: 2026-01-21 14:17:48.410 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:17:48 compute-0 nova_compute[239261]: 2026-01-21 14:17:48.411 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:17:48 compute-0 nova_compute[239261]: 2026-01-21 14:17:48.411 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:17:48 compute-0 nova_compute[239261]: 2026-01-21 14:17:48.424 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:17:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 74 KiB/s wr, 8 op/s
Jan 21 14:17:48 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "format": "json"}]: dispatch
Jan 21 14:17:48 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4934ef18-6cd5-442b-a8fa-227c3608b0bd", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a2a02f3a-dc86-4c41-ae4c-20c17fe75226/37873579-fd8d-4e0a-980b-73d1a1678e9b'.
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a2a02f3a-dc86-4c41-ae4c-20c17fe75226/.meta.tmp'
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a2a02f3a-dc86-4c41-ae4c-20c17fe75226/.meta.tmp' to config b'/volumes/_nogroup/a2a02f3a-dc86-4c41-ae4c-20c17fe75226/.meta'
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "format": "json"}]: dispatch
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 14:17:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:17:49 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:17:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:17:49 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:17:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:17:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
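[editor's note] The grant side shows where multi-tenancy is enforced: the mgr first reads any existing key ("auth get"), records which tenant owns the auth ID, then issues "auth get-or-create" with three tightly scoped caps — mds access limited to the subvolume path, osd access limited to the data pool plus the per-subvolume RADOS namespace that namespace_isolated provisioned, and read-only mon access. A hypothetical helper that reproduces the cap list from the audit entry above:

    # Hypothetical helper; the cap strings are copied verbatim from the
    # "auth get-or-create" audit entry above.
    def subvolume_caps(path: str, pool: str, namespace: str, level: str = "r"):
        return [
            "mds", f"allow {level} path={path}",
            "osd", f"allow {level} pool={pool} namespace={namespace}",
            "mon", "allow r",
        ]

    caps = subvolume_caps(
        "/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/"
        "04464bce-b5c2-48d9-860a-5b8b6ce45575",
        "cephfs.cephfs.data",
        "fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b",
    )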
Jan 21 14:17:49 compute-0 nova_compute[239261]: 2026-01-21 14:17:49.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:17:49 compute-0 nova_compute[239261]: 2026-01-21 14:17:49.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:17:49 compute-0 nova_compute[239261]: 2026-01-21 14:17:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:17:49 compute-0 ceph-mon[75031]: pgmap v1132: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 74 KiB/s wr, 8 op/s
Jan 21 14:17:49 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:17:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:49 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "admin", "format": "json"}]: dispatch
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Jan 21 14:17:50 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:50.371+0000 7fc516655640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 61 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 140 KiB/s wr, 15 op/s
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "format": "json"}]: dispatch
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4d9a3e7-c006-4c96-ab86-0ee694f36366' of type subvolume
Jan 21 14:17:50 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:50.561+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4d9a3e7-c006-4c96-ab86-0ee694f36366' of type subvolume
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d4d9a3e7-c006-4c96-ab86-0ee694f36366'' moved to trashcan
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4d9a3e7-c006-4c96-ab86-0ee694f36366, vol_name:cephfs) < ""
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662283883303134 of space, bias 1.0, pg target 0.199868516499094 quantized to 32 (current 32)
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0003203949192705631 of space, bias 4.0, pg target 0.3844739031246757 quantized to 16 (current 16)
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:17:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
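[editor's note] Every pg_autoscaler line above follows one formula: the pool's share of raw capacity, times its bias, times a cluster-wide PG budget, then quantized to a power of two (the autoscaler also declines to shrink pools whose target rounds below their current count, which is why the empty pools stay at 32). The budget that reproduces every figure in this log is 300 — consistent with the default mon_target_pg_per_osd of 100 across what would then be 3 OSDs, though that split is an inference; only the product is verifiable here:

    # Reproduces the autoscaler arithmetic logged above.
    # 300 = assumed mon_target_pg_per_osd (100) x assumed 3 OSDs; only the
    # product 300 is actually checkable against this log.
    PG_BUDGET = 300

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * PG_BUDGET

    assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-15
    assert abs(pg_target(0.0003203949192705631, 4.0) - 0.3844739031246757) < 1e-12
    assert abs(pg_target(0.0006662283883303134, 1.0) - 0.199868516499094) < 1e-12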
Jan 21 14:17:50 compute-0 nova_compute[239261]: 2026-01-21 14:17:50.719 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:17:50 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:50 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "format": "json"}]: dispatch
Jan 21 14:17:50 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:17:50 compute-0 nova_compute[239261]: 2026-01-21 14:17:50.893 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:17:50 compute-0 nova_compute[239261]: 2026-01-21 14:17:50.894 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:17:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:51 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 14:17:51 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6/1245fb95-20a1-49fb-a04e-48144d861baf'.
Jan 21 14:17:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6/.meta.tmp'
Jan 21 14:17:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6/.meta.tmp' to config b'/volumes/_nogroup/2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6/.meta'
Jan 21 14:17:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 14:17:51 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "format": "json"}]: dispatch
Jan 21 14:17:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 14:17:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 14:17:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:17:51 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:51 compute-0 nova_compute[239261]: 2026-01-21 14:17:51.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:17:51 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "auth_id": "admin", "format": "json"}]: dispatch
Jan 21 14:17:51 compute-0 ceph-mon[75031]: pgmap v1133: 305 pgs: 305 active+clean; 61 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 140 KiB/s wr, 15 op/s
Jan 21 14:17:51 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "format": "json"}]: dispatch
Jan 21 14:17:51 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d4d9a3e7-c006-4c96-ab86-0ee694f36366", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:51 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 61 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 107 KiB/s wr, 11 op/s
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "format": "json"}]: dispatch
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a2a02f3a-dc86-4c41-ae4c-20c17fe75226' of type subvolume
Jan 21 14:17:52 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:52.660+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a2a02f3a-dc86-4c41-ae4c-20c17fe75226' of type subvolume
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a2a02f3a-dc86-4c41-ae4c-20c17fe75226'' moved to trashcan
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a2a02f3a-dc86-4c41-ae4c-20c17fe75226, vol_name:cephfs) < ""
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:17:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:17:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 14:17:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:17:52 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:17:52 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:17:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:17:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:17:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "format": "json"}]: dispatch
Jan 21 14:17:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:17:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:17:53 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:17:53 compute-0 nova_compute[239261]: 2026-01-21 14:17:53.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:17:54 compute-0 ceph-mon[75031]: pgmap v1134: 305 pgs: 305 active+clean; 61 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 107 KiB/s wr, 11 op/s
Jan 21 14:17:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "format": "json"}]: dispatch
Jan 21 14:17:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a2a02f3a-dc86-4c41-ae4c-20c17fe75226", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:17:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 147 KiB/s wr, 14 op/s
Jan 21 14:17:55 compute-0 ceph-mon[75031]: pgmap v1135: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 147 KiB/s wr, 14 op/s
Jan 21 14:17:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 107 KiB/s wr, 13 op/s
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:17:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:17:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:17:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:17:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "format": "json"}]: dispatch
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:56 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:56.937+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '93827b21-dc3b-4f90-ab80-d532ba42cf82' of type subvolume
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '93827b21-dc3b-4f90-ab80-d532ba42cf82' of type subvolume
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/93827b21-dc3b-4f90-ab80-d532ba42cf82'' moved to trashcan
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:93827b21-dc3b-4f90-ab80-d532ba42cf82, vol_name:cephfs) < ""
Jan 21 14:17:57 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "format": "json"}]: dispatch
Jan 21 14:17:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:17:57 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:17:57.047+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6' of type subvolume
Jan 21 14:17:57 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6' of type subvolume
Jan 21 14:17:57 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 14:17:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6'' moved to trashcan
Jan 21 14:17:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:17:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6, vol_name:cephfs) < ""
Jan 21 14:17:57 compute-0 ceph-mon[75031]: pgmap v1136: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 107 KiB/s wr, 13 op/s
Jan 21 14:17:57 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:17:57 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:17:57 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:17:57 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:17:57 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "format": "json"}]: dispatch
Jan 21 14:17:57 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "93827b21-dc3b-4f90-ab80-d532ba42cf82", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 107 KiB/s wr, 11 op/s
Jan 21 14:17:58 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "format": "json"}]: dispatch
Jan 21 14:17:58 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2e583eb2-e6ab-4a68-a607-ecd6ca50e3b6", "force": true, "format": "json"}]: dispatch
Jan 21 14:17:58 compute-0 sudo[251046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:17:58 compute-0 sudo[251046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:17:58 compute-0 sudo[251046]: pam_unix(sudo:session): session closed for user root
Jan 21 14:17:58 compute-0 sudo[251071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 21 14:17:58 compute-0 sudo[251071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:17:59 compute-0 podman[251138]: 2026-01-21 14:17:59.338494363 +0000 UTC m=+0.062589465 container exec cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:17:59 compute-0 podman[251138]: 2026-01-21 14:17:59.439935647 +0000 UTC m=+0.164030659 container exec_died cfe4b6f08f6d2a2c51e9ed3e1a16d5b8c199bf12ed0f0dd501feacf767ec2649 (image=quay.io/ceph/ceph:v20, name=ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:17:59 compute-0 ceph-mon[75031]: pgmap v1137: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 107 KiB/s wr, 11 op/s
Jan 21 14:18:00 compute-0 sudo[251071]: pam_unix(sudo:session): session closed for user root
Jan 21 14:18:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:18:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:18:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:18:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:18:00 compute-0 sudo[251328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:18:00 compute-0 sudo[251328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:18:00 compute-0 sudo[251328]: pam_unix(sudo:session): session closed for user root
Jan 21 14:18:00 compute-0 sudo[251353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:18:00 compute-0 sudo[251353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:18:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 14:18:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:18:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:18:00 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 146 KiB/s wr, 15 op/s
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "aefb1c02-8305-4e9b-9f91-87659561ca53", "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:aefb1c02-8305-4e9b-9f91-87659561ca53, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 21 14:18:00 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:aefb1c02-8305-4e9b-9f91-87659561ca53, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 21 14:18:00 compute-0 sudo[251353]: pam_unix(sudo:session): session closed for user root
Jan 21 14:18:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 21 14:18:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 14:18:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:18:00 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:18:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:18:00 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:18:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:18:01 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:18:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:18:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:18:01 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:18:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:18:01 compute-0 sudo[251410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:18:01 compute-0 sudo[251410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:18:01 compute-0 sudo[251410]: pam_unix(sudo:session): session closed for user root
Jan 21 14:18:01 compute-0 sudo[251435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:18:01 compute-0 sudo[251435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: pgmap v1138: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 146 KiB/s wr, 15 op/s
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "aefb1c02-8305-4e9b-9f91-87659561ca53", "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:18:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:01 compute-0 podman[251472]: 2026-01-21 14:18:01.580241032 +0000 UTC m=+0.052162721 container create aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 14:18:01 compute-0 systemd[1]: Started libpod-conmon-aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316.scope.
Jan 21 14:18:01 compute-0 podman[251472]: 2026-01-21 14:18:01.552708488 +0000 UTC m=+0.024630217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:18:01 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:18:01 compute-0 podman[251472]: 2026-01-21 14:18:01.679932325 +0000 UTC m=+0.151854034 container init aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:18:01 compute-0 podman[251472]: 2026-01-21 14:18:01.686886697 +0000 UTC m=+0.158808376 container start aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:18:01 compute-0 podman[251472]: 2026-01-21 14:18:01.690045641 +0000 UTC m=+0.161967340 container attach aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 21 14:18:01 compute-0 systemd[1]: libpod-aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316.scope: Deactivated successfully.
Jan 21 14:18:01 compute-0 stoic_mayer[251488]: 167 167
Jan 21 14:18:01 compute-0 conmon[251488]: conmon aabd9d317b4927eff1eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316.scope/container/memory.events
Jan 21 14:18:01 compute-0 podman[251472]: 2026-01-21 14:18:01.694493005 +0000 UTC m=+0.166414694 container died aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4047451b0460fc3ca7d64588d7ad3e57bb1eb6e94e02ca75f3f4410f5b67884e-merged.mount: Deactivated successfully.
Jan 21 14:18:01 compute-0 podman[251472]: 2026-01-21 14:18:01.737700066 +0000 UTC m=+0.209621745 container remove aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:18:01 compute-0 systemd[1]: libpod-conmon-aabd9d317b4927eff1eb6b8465a24aabd706009471c141d5fffde8636ef92316.scope: Deactivated successfully.
Jan 21 14:18:01 compute-0 podman[251510]: 2026-01-21 14:18:01.941680768 +0000 UTC m=+0.047442830 container create 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 21 14:18:01 compute-0 systemd[1]: Started libpod-conmon-3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8.scope.
Jan 21 14:18:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:02 compute-0 podman[251510]: 2026-01-21 14:18:01.923606376 +0000 UTC m=+0.029368468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:18:02 compute-0 podman[251510]: 2026-01-21 14:18:02.024014024 +0000 UTC m=+0.129776106 container init 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 14:18:02 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:02 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 14:18:02 compute-0 podman[251510]: 2026-01-21 14:18:02.038909643 +0000 UTC m=+0.144671695 container start 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:18:02 compute-0 podman[251510]: 2026-01-21 14:18:02.041967855 +0000 UTC m=+0.147729917 container attach 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:18:02 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/03db482a-4a9f-44b3-ba43-fe5ff12e229e/4ada938c-7ccc-47bb-8a7d-06e39fa21b91'.
Jan 21 14:18:02 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/03db482a-4a9f-44b3-ba43-fe5ff12e229e/.meta.tmp'
Jan 21 14:18:02 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/03db482a-4a9f-44b3-ba43-fe5ff12e229e/.meta.tmp' to config b'/volumes/_nogroup/03db482a-4a9f-44b3-ba43-fe5ff12e229e/.meta'
Jan 21 14:18:02 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 14:18:02 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "format": "json"}]: dispatch
Jan 21 14:18:02 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 14:18:02 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 14:18:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:18:02 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:02 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 8 op/s
Jan 21 14:18:02 compute-0 vibrant_euler[251527]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:18:02 compute-0 vibrant_euler[251527]: --> All data devices are unavailable
Jan 21 14:18:02 compute-0 systemd[1]: libpod-3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8.scope: Deactivated successfully.
Jan 21 14:18:02 compute-0 podman[251510]: 2026-01-21 14:18:02.524929744 +0000 UTC m=+0.630691906 container died 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 21 14:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dc311626209a535ded264797aa0829ec1ad8e2cf308088375bcfa25565fa388-merged.mount: Deactivated successfully.
Jan 21 14:18:02 compute-0 podman[251510]: 2026-01-21 14:18:02.567113061 +0000 UTC m=+0.672875113 container remove 3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 14:18:02 compute-0 systemd[1]: libpod-conmon-3e47c9517110984420f5957915993300d1cb20fdd2a6356a494560067efa32d8.scope: Deactivated successfully.
Jan 21 14:18:02 compute-0 sudo[251435]: pam_unix(sudo:session): session closed for user root
Jan 21 14:18:02 compute-0 sudo[251558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:18:02 compute-0 sudo[251558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:18:02 compute-0 sudo[251558]: pam_unix(sudo:session): session closed for user root
Jan 21 14:18:02 compute-0 sudo[251583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:18:02 compute-0 sudo[251583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:18:03 compute-0 podman[251621]: 2026-01-21 14:18:03.041765666 +0000 UTC m=+0.049132890 container create 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:18:03 compute-0 systemd[1]: Started libpod-conmon-22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf.scope.
Jan 21 14:18:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:18:03 compute-0 podman[251621]: 2026-01-21 14:18:03.020043249 +0000 UTC m=+0.027410503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:18:03 compute-0 podman[251621]: 2026-01-21 14:18:03.12910713 +0000 UTC m=+0.136474374 container init 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:18:03 compute-0 podman[251621]: 2026-01-21 14:18:03.136678337 +0000 UTC m=+0.144045561 container start 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:18:03 compute-0 podman[251621]: 2026-01-21 14:18:03.140209259 +0000 UTC m=+0.147576503 container attach 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:18:03 compute-0 agitated_mcclintock[251637]: 167 167
Jan 21 14:18:03 compute-0 systemd[1]: libpod-22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf.scope: Deactivated successfully.
Jan 21 14:18:03 compute-0 podman[251621]: 2026-01-21 14:18:03.143433635 +0000 UTC m=+0.150800859 container died 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a3da2ce1e88f00dfcf93aa8f8b518d895f2a07b52b63880cd63d8d374b90f2-merged.mount: Deactivated successfully.
Jan 21 14:18:03 compute-0 podman[251621]: 2026-01-21 14:18:03.188406447 +0000 UTC m=+0.195773671 container remove 22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mcclintock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:18:03 compute-0 systemd[1]: libpod-conmon-22225eefb94c3b314f67682d160b86b90d3abd45ba52d2ea91f6b30434fa9cbf.scope: Deactivated successfully.
Jan 21 14:18:03 compute-0 podman[251661]: 2026-01-21 14:18:03.363233328 +0000 UTC m=+0.045077936 container create fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:18:03 compute-0 systemd[1]: Started libpod-conmon-fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd.scope.
Jan 21 14:18:03 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1472fb157ea4e617a757eae9603b83d899337e7a23ad73f800b6e3fea72492cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:03 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1472fb157ea4e617a757eae9603b83d899337e7a23ad73f800b6e3fea72492cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1472fb157ea4e617a757eae9603b83d899337e7a23ad73f800b6e3fea72492cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:03 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "format": "json"}]: dispatch
Jan 21 14:18:03 compute-0 ceph-mon[75031]: pgmap v1139: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 8 op/s
Jan 21 14:18:03 compute-0 podman[251661]: 2026-01-21 14:18:03.343604789 +0000 UTC m=+0.025449417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:18:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1472fb157ea4e617a757eae9603b83d899337e7a23ad73f800b6e3fea72492cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:03 compute-0 podman[251661]: 2026-01-21 14:18:03.449155398 +0000 UTC m=+0.131000036 container init fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:18:03 compute-0 podman[251661]: 2026-01-21 14:18:03.455434425 +0000 UTC m=+0.137279033 container start fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:18:03 compute-0 podman[251661]: 2026-01-21 14:18:03.458673711 +0000 UTC m=+0.140518339 container attach fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 14:18:03 compute-0 loving_johnson[251678]: {
Jan 21 14:18:03 compute-0 loving_johnson[251678]:     "0": [
Jan 21 14:18:03 compute-0 loving_johnson[251678]:         {
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "devices": [
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "/dev/loop3"
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             ],
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_name": "ceph_lv0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_size": "21470642176",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "name": "ceph_lv0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "tags": {
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.cluster_name": "ceph",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.crush_device_class": "",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.encrypted": "0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.objectstore": "bluestore",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.osd_id": "0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.type": "block",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.vdo": "0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.with_tpm": "0"
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             },
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "type": "block",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "vg_name": "ceph_vg0"
Jan 21 14:18:03 compute-0 loving_johnson[251678]:         }
Jan 21 14:18:03 compute-0 loving_johnson[251678]:     ],
Jan 21 14:18:03 compute-0 loving_johnson[251678]:     "1": [
Jan 21 14:18:03 compute-0 loving_johnson[251678]:         {
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "devices": [
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "/dev/loop4"
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             ],
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_name": "ceph_lv1",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_size": "21470642176",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "name": "ceph_lv1",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "tags": {
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.cluster_name": "ceph",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.crush_device_class": "",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.encrypted": "0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.objectstore": "bluestore",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.osd_id": "1",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.type": "block",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.vdo": "0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.with_tpm": "0"
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             },
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "type": "block",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "vg_name": "ceph_vg1"
Jan 21 14:18:03 compute-0 loving_johnson[251678]:         }
Jan 21 14:18:03 compute-0 loving_johnson[251678]:     ],
Jan 21 14:18:03 compute-0 loving_johnson[251678]:     "2": [
Jan 21 14:18:03 compute-0 loving_johnson[251678]:         {
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "devices": [
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "/dev/loop5"
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             ],
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_name": "ceph_lv2",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_size": "21470642176",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "name": "ceph_lv2",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "tags": {
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.cluster_name": "ceph",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.crush_device_class": "",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.encrypted": "0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.objectstore": "bluestore",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.osd_id": "2",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.type": "block",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.vdo": "0",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:                 "ceph.with_tpm": "0"
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             },
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "type": "block",
Jan 21 14:18:03 compute-0 loving_johnson[251678]:             "vg_name": "ceph_vg2"
Jan 21 14:18:03 compute-0 loving_johnson[251678]:         }
Jan 21 14:18:03 compute-0 loving_johnson[251678]:     ]
Jan 21 14:18:03 compute-0 loving_johnson[251678]: }
Jan 21 14:18:03 compute-0 systemd[1]: libpod-fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd.scope: Deactivated successfully.
Jan 21 14:18:03 compute-0 podman[251661]: 2026-01-21 14:18:03.751502112 +0000 UTC m=+0.433346710 container died fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 14:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1472fb157ea4e617a757eae9603b83d899337e7a23ad73f800b6e3fea72492cd-merged.mount: Deactivated successfully.
Jan 21 14:18:03 compute-0 podman[251661]: 2026-01-21 14:18:03.796612097 +0000 UTC m=+0.478456715 container remove fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_johnson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:18:03 compute-0 systemd[1]: libpod-conmon-fd77b186873a2209d123b32656bc071b8967e4e0e9c57d5fdb1b0cb7ab16e8bd.scope: Deactivated successfully.
Jan 21 14:18:03 compute-0 sudo[251583]: pam_unix(sudo:session): session closed for user root
Jan 21 14:18:03 compute-0 sudo[251701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:18:03 compute-0 sudo[251701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:18:03 compute-0 sudo[251701]: pam_unix(sudo:session): session closed for user root
Jan 21 14:18:03 compute-0 sudo[251726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:18:03 compute-0 sudo[251726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:18:04 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "aefb1c02-8305-4e9b-9f91-87659561ca53", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:aefb1c02-8305-4e9b-9f91-87659561ca53, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 21 14:18:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:aefb1c02-8305-4e9b-9f91-87659561ca53, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 21 14:18:04 compute-0 podman[251763]: 2026-01-21 14:18:04.237958864 +0000 UTC m=+0.041920272 container create 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:18:04 compute-0 systemd[1]: Started libpod-conmon-66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5.scope.
Jan 21 14:18:04 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:18:04 compute-0 podman[251763]: 2026-01-21 14:18:04.220212378 +0000 UTC m=+0.024173806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:18:04 compute-0 podman[251763]: 2026-01-21 14:18:04.328474341 +0000 UTC m=+0.132435779 container init 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 14:18:04 compute-0 podman[251763]: 2026-01-21 14:18:04.334368098 +0000 UTC m=+0.138329506 container start 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:18:04 compute-0 compassionate_galileo[251779]: 167 167
Jan 21 14:18:04 compute-0 podman[251763]: 2026-01-21 14:18:04.339352045 +0000 UTC m=+0.143313453 container attach 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 14:18:04 compute-0 systemd[1]: libpod-66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5.scope: Deactivated successfully.
Jan 21 14:18:04 compute-0 podman[251763]: 2026-01-21 14:18:04.340084292 +0000 UTC m=+0.144045690 container died 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:18:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-23b58e06fa2acb276e43418ea56e5a966e8ba5e02f516178947f18e99d9beafd-merged.mount: Deactivated successfully.
Jan 21 14:18:04 compute-0 podman[251763]: 2026-01-21 14:18:04.382773411 +0000 UTC m=+0.186734839 container remove 66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_galileo, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 21 14:18:04 compute-0 systemd[1]: libpod-conmon-66907d8648618b1a5ade6e5d117c3fae887928bda62bc77cc2a42db548b43ee5.scope: Deactivated successfully.
Jan 21 14:18:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 9 op/s
Jan 21 14:18:04 compute-0 podman[251802]: 2026-01-21 14:18:04.549542163 +0000 UTC m=+0.041686037 container create 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:18:04 compute-0 systemd[1]: Started libpod-conmon-2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841.scope.
Jan 21 14:18:04 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ad93e9ccd78dfb14995d4f01c85998195086f7b472a2e765042b94f9df8012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ad93e9ccd78dfb14995d4f01c85998195086f7b472a2e765042b94f9df8012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ad93e9ccd78dfb14995d4f01c85998195086f7b472a2e765042b94f9df8012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ad93e9ccd78dfb14995d4f01c85998195086f7b472a2e765042b94f9df8012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:18:04 compute-0 podman[251802]: 2026-01-21 14:18:04.529853412 +0000 UTC m=+0.021997306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:18:04 compute-0 podman[251802]: 2026-01-21 14:18:04.62635197 +0000 UTC m=+0.118495844 container init 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:18:04 compute-0 podman[251802]: 2026-01-21 14:18:04.633277842 +0000 UTC m=+0.125421716 container start 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 14:18:04 compute-0 podman[251802]: 2026-01-21 14:18:04.636792934 +0000 UTC m=+0.128936878 container attach 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:18:04 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:18:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:18:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:04 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:18:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:18:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:04 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:04 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:05 compute-0 lvm[251896]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:18:05 compute-0 lvm[251897]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:18:05 compute-0 lvm[251897]: VG ceph_vg1 finished
Jan 21 14:18:05 compute-0 lvm[251896]: VG ceph_vg0 finished
Jan 21 14:18:05 compute-0 lvm[251899]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:18:05 compute-0 lvm[251899]: VG ceph_vg2 finished
Jan 21 14:18:05 compute-0 inspiring_hugle[251818]: {}
Jan 21 14:18:05 compute-0 systemd[1]: libpod-2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841.scope: Deactivated successfully.
Jan 21 14:18:05 compute-0 systemd[1]: libpod-2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841.scope: Consumed 1.427s CPU time.
Jan 21 14:18:05 compute-0 podman[251802]: 2026-01-21 14:18:05.533824582 +0000 UTC m=+1.025968516 container died 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:18:05 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "format": "json"}]: dispatch
Jan 21 14:18:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:05 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '03db482a-4a9f-44b3-ba43-fe5ff12e229e' of type subvolume
Jan 21 14:18:05 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:05.870+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '03db482a-4a9f-44b3-ba43-fe5ff12e229e' of type subvolume
Jan 21 14:18:05 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 14:18:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/03db482a-4a9f-44b3-ba43-fe5ff12e229e'' moved to trashcan
Jan 21 14:18:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:18:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:03db482a-4a9f-44b3-ba43-fe5ff12e229e, vol_name:cephfs) < ""
Jan 21 14:18:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 66 KiB/s wr, 9 op/s
Jan 21 14:18:07 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "aefb1c02-8305-4e9b-9f91-87659561ca53", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:07 compute-0 ceph-mon[75031]: pgmap v1140: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 9 op/s
Jan 21 14:18:07 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:18:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:07 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4ad93e9ccd78dfb14995d4f01c85998195086f7b472a2e765042b94f9df8012-merged.mount: Deactivated successfully.
Jan 21 14:18:08 compute-0 podman[251802]: 2026-01-21 14:18:08.111155892 +0000 UTC m=+3.603299756 container remove 2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:18:08 compute-0 systemd[1]: libpod-conmon-2a1bf0cde247a6dcf61049e21aaebbe09ca150041f80bab38bae3c1649c70841.scope: Deactivated successfully.
Jan 21 14:18:08 compute-0 sudo[251726]: pam_unix(sudo:session): session closed for user root
Jan 21 14:18:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:18:08 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:18:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:18:08 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:18:08 compute-0 sudo[251917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:18:08 compute-0 sudo[251917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:18:08 compute-0 sudo[251917]: pam_unix(sudo:session): session closed for user root
Jan 21 14:18:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 65 KiB/s wr, 7 op/s
Jan 21 14:18:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "format": "json"}]: dispatch
Jan 21 14:18:08 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "03db482a-4a9f-44b3-ba43-fe5ff12e229e", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:08 compute-0 ceph-mon[75031]: pgmap v1141: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 66 KiB/s wr, 9 op/s
Jan 21 14:18:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:18:08 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:18:09 compute-0 ceph-mon[75031]: pgmap v1142: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 65 KiB/s wr, 7 op/s
Jan 21 14:18:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 103 KiB/s wr, 11 op/s
Jan 21 14:18:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:18:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:18:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:18:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:18:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:18:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50ada2df0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50ada28b0>)]
Jan 21 14:18:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:18:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:18:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:11 compute-0 ceph-mon[75031]: pgmap v1143: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 103 KiB/s wr, 11 op/s
Jan 21 14:18:12 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "701e45e4-89b7-4d59-81cf-4a02e67d640b", "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:701e45e4-89b7-4d59-81cf-4a02e67d640b, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 21 14:18:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:701e45e4-89b7-4d59-81cf-4a02e67d640b, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Jan 21 14:18:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 6 op/s
Jan 21 14:18:12 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.tnwklj(active, since 33m)
Jan 21 14:18:13 compute-0 podman[251943]: 2026-01-21 14:18:13.344748641 +0000 UTC m=+0.062523434 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:18:13 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:13 compute-0 podman[251942]: 2026-01-21 14:18:13.382981295 +0000 UTC m=+0.100711387 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 21 14:18:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:18:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 14:18:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:18:13 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:18:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:13 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:18:13 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:18:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:18:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:13 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "701e45e4-89b7-4d59-81cf-4a02e67d640b", "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:13 compute-0 ceph-mon[75031]: pgmap v1144: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 6 op/s
Jan 21 14:18:13 compute-0 ceph-mon[75031]: mgrmap e17: compute-0.tnwklj(active, since 33m)
Jan 21 14:18:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:18:13 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:18:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 7 op/s
Jan 21 14:18:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:15 compute-0 ceph-mon[75031]: pgmap v1145: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 7 op/s
Jan 21 14:18:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 8 op/s
Jan 21 14:18:16 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "701e45e4-89b7-4d59-81cf-4a02e67d640b", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:701e45e4-89b7-4d59-81cf-4a02e67d640b, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 21 14:18:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:701e45e4-89b7-4d59-81cf-4a02e67d640b, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:18:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:18:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:18:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:18:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:17 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/619f82dd-2461-43fe-994a-71a6fb22cc9a/f9498102-fea3-4cc1-a405-bc1e6a9a7838'.
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/619f82dd-2461-43fe-994a-71a6fb22cc9a/.meta.tmp'
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/619f82dd-2461-43fe-994a-71a6fb22cc9a/.meta.tmp' to config b'/volumes/_nogroup/619f82dd-2461-43fe-994a-71a6fb22cc9a/.meta'
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "format": "json"}]: dispatch
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
Jan 21 14:18:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:18:17 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d6e6fe01-b413-4bf6-b249-91dc19a3e3fc/a3719bda-5e93-4bc8-a6ba-43796639d277'.
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d6e6fe01-b413-4bf6-b249-91dc19a3e3fc/.meta.tmp'
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d6e6fe01-b413-4bf6-b249-91dc19a3e3fc/.meta.tmp' to config b'/volumes/_nogroup/d6e6fe01-b413-4bf6-b249-91dc19a3e3fc/.meta'
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "format": "json"}]: dispatch
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 14:18:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 14:18:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:18:17 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:17 compute-0 ceph-mon[75031]: pgmap v1146: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 8 op/s
Jan 21 14:18:17 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "701e45e4-89b7-4d59-81cf-4a02e67d640b", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:17 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:18:17 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:17 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:17 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:17 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 66 KiB/s wr, 6 op/s
Jan 21 14:18:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:18:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "format": "json"}]: dispatch
Jan 21 14:18:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "format": "json"}]: dispatch
Jan 21 14:18:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 14:18:19 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/1f606e4a-db26-4f08-a985-162ca262e6fc'.
Jan 21 14:18:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp'
Jan 21 14:18:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp' to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta'
Jan 21 14:18:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 14:18:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "format": "json"}]: dispatch
Jan 21 14:18:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 14:18:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 14:18:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:18:19 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:19 compute-0 ceph-mon[75031]: pgmap v1147: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 66 KiB/s wr, 6 op/s
Jan 21 14:18:19 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 99 KiB/s wr, 10 op/s
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:18:20 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:18:20 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 14:18:20 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:18:20 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:18:20 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
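[annotation] Revoking access is the mirror image: deauthorize removes the client.alice entity via "auth rm", and evict asks the MDS to terminate any live sessions matching the auth name and client root, which is exactly what the asok "session evict" line on the MDS shows. An operator-level equivalent, sketched with the mds name and filter strings taken from the log (the "ceph tell" form is an assumption of this sketch; the log shows the admin-socket path):

    import subprocess

    # Same two filters the mgr passed to the MDS above.
    subprocess.run([
        "ceph", "tell", "mds.cephfs.compute-0.ddixwa",
        "session", "evict",
        "auth_name=alice",
        "client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575",
    ], check=True)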
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "format": "json"}]: dispatch
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:20.896+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '619f82dd-2461-43fe-994a-71a6fb22cc9a' of type subvolume
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '619f82dd-2461-43fe-994a-71a6fb22cc9a' of type subvolume
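[annotation] The (95) in the pair of lines above is errno EOPNOTSUPP: "fs clone status" is only defined for subvolumes created as clones, so polling it on a plain subvolume fails by design, and the caller evidently treats that as "nothing to wait for" before issuing the rm that follows. A sketch of that tolerant poll (matching on stderr text is an assumption of this sketch; the error string itself is copied from the log):

    import json
    import subprocess

    def clone_status(volume, clone):
        proc = subprocess.run(
            ["ceph", "fs", "clone", "status", volume, clone,
             "--format", "json"],
            capture_output=True, text=True)
        if proc.returncode == 0:
            return json.loads(proc.stdout)
        if "not allowed on subvolume" in proc.stderr:
            return None  # plain subvolume, not a clone: nothing to poll
        proc.check_returncode()  # any other failure is real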
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/619f82dd-2461-43fe-994a-71a6fb22cc9a'' moved to trashcan
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:18:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:619f82dd-2461-43fe-994a-71a6fb22cc9a, vol_name:cephfs) < ""
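[annotation] Note that the rm returns almost immediately: the mgr renames the subvolume directory into a trash area ("moved to trashcan") and queues an async purge job, so the expensive data deletion happens in background threads rather than inline. An illustration of that pattern only (paths and helpers here are hypothetical, not the mgr's actual implementation):

    import queue
    import shutil
    import threading
    import uuid
    from pathlib import Path

    TRASH = Path("/volumes/_deleting")   # hypothetical trash directory
    jobs = queue.Queue()

    def subvolume_rm(path: Path):
        target = TRASH / uuid.uuid4().hex
        path.rename(target)              # cheap atomic move; rm returns now
        jobs.put(target)                 # defer the delete to a worker

    def purge_worker():
        while True:
            shutil.rmtree(jobs.get())    # drain the trash in the background

    threading.Thread(target=purge_worker, daemon=True).start()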
Jan 21 14:18:20 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:20 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "format": "json"}]: dispatch
Jan 21 14:18:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:18:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:18:20 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:18:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "format": "json"}]: dispatch
Jan 21 14:18:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:21 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:21.104+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd6e6fe01-b413-4bf6-b249-91dc19a3e3fc' of type subvolume
Jan 21 14:18:21 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd6e6fe01-b413-4bf6-b249-91dc19a3e3fc' of type subvolume
Jan 21 14:18:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 14:18:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d6e6fe01-b413-4bf6-b249-91dc19a3e3fc'' moved to trashcan
Jan 21 14:18:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:18:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d6e6fe01-b413-4bf6-b249-91dc19a3e3fc, vol_name:cephfs) < ""
Jan 21 14:18:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:21 compute-0 ceph-mon[75031]: pgmap v1148: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 99 KiB/s wr, 10 op/s
Jan 21 14:18:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:18:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:18:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "format": "json"}]: dispatch
Jan 21 14:18:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "619f82dd-2461-43fe-994a-71a6fb22cc9a", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 61 KiB/s wr, 6 op/s
Jan 21 14:18:22 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "snap_name": "325147df-76fa-4b90-9267-80d02dee5e0b", "format": "json"}]: dispatch
Jan 21 14:18:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 14:18:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "format": "json"}]: dispatch
Jan 21 14:18:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d6e6fe01-b413-4bf6-b249-91dc19a3e3fc", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:23 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
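[annotation] Snapshot operations dispatch through the same volumes-module pattern; this create took roughly a second between Starting and Finishing. The CLI equivalents of the create above and of the forced removals that follow at 14:18:27 (subvolume and snapshot names copied from the log):

    import subprocess

    SUB = "75787c7c-a801-4e74-8f54-f20d6b4880b0"
    SNAP = "325147df-76fa-4b90-9267-80d02dee5e0b"

    subprocess.run(["ceph", "fs", "subvolume", "snapshot", "create",
                    "cephfs", SUB, SNAP], check=True)
    # The later cleanup in this log removes snapshots with force semantics:
    subprocess.run(["ceph", "fs", "subvolume", "snapshot", "rm",
                    "cephfs", SUB, SNAP, "--force"], check=True)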
Jan 21 14:18:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:18:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2885507471' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:18:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:18:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2885507471' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:18:24 compute-0 ceph-mon[75031]: pgmap v1149: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 61 KiB/s wr, 6 op/s
Jan 21 14:18:24 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "snap_name": "325147df-76fa-4b90-9267-80d02dee5e0b", "format": "json"}]: dispatch
Jan 21 14:18:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2885507471' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:18:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2885507471' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
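[annotation] Interleaved with the share traffic, a second client connection (the 2885507471 nonce) polls "df" and "osd pool get-quota" on the 'volumes' pool: the usual inputs for capacity reporting, where a max_bytes of 0 means no quota is set. A sketch of deriving free space from those two answers (the JSON field names are as current Ceph releases emit them; treat them as an assumption of this sketch):

    import json
    import subprocess

    def mon_json(*args):
        return json.loads(subprocess.check_output(
            ["ceph", *args, "--format", "json"], text=True))

    stats = mon_json("df")                          # cluster-wide usage
    quota = mon_json("osd", "pool", "get-quota", "volumes")

    total = stats["stats"]["total_bytes"]
    avail = stats["stats"]["total_avail_bytes"]
    ceiling = quota["quota_max_bytes"] or total     # 0 => no quota set
    print(f"avail={avail} of {min(total, ceiling)} bytes")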
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:18:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:18:24 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:18:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:18:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6b95bf55-eb7d-43c0-9f25-36884d529a89/d5ba67b3-4905-4db0-80f8-8f636f4190ab'.
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6b95bf55-eb7d-43c0-9f25-36884d529a89/.meta.tmp'
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6b95bf55-eb7d-43c0-9f25-36884d529a89/.meta.tmp' to config b'/volumes/_nogroup/6b95bf55-eb7d-43c0-9f25-36884d529a89/.meta'
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "format": "json"}]: dispatch
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 14:18:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:18:24 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 62 KiB/s wr, 8 op/s
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c1154fdb-2d56-4b79-b688-aa930d49c33b/c6b7d6f4-455b-41b1-b245-1cc7021ed1f6'.
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c1154fdb-2d56-4b79-b688-aa930d49c33b/.meta.tmp'
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c1154fdb-2d56-4b79-b688-aa930d49c33b/.meta.tmp' to config b'/volumes/_nogroup/c1154fdb-2d56-4b79-b688-aa930d49c33b/.meta'
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "format": "json"}]: dispatch
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 14:18:24 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 14:18:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:18:24 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:25 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:18:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:18:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:25 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:25 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "format": "json"}]: dispatch
Jan 21 14:18:25 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:25 compute-0 ceph-mon[75031]: pgmap v1150: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 62 KiB/s wr, 8 op/s
Jan 21 14:18:25 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:25 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "format": "json"}]: dispatch
Jan 21 14:18:25 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 121 KiB/s wr, 12 op/s
Jan 21 14:18:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "snap_name": "325147df-76fa-4b90-9267-80d02dee5e0b_beb041ba-4500-43bc-91c9-794ff68a2025", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b_beb041ba-4500-43bc-91c9-794ff68a2025, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 14:18:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp'
Jan 21 14:18:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp' to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta'
Jan 21 14:18:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b_beb041ba-4500-43bc-91c9-794ff68a2025, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 14:18:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "snap_name": "325147df-76fa-4b90-9267-80d02dee5e0b", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 14:18:27 compute-0 ceph-mon[75031]: pgmap v1151: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 121 KiB/s wr, 12 op/s
Jan 21 14:18:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp'
Jan 21 14:18:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta.tmp' to config b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0/.meta'
Jan 21 14:18:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:325147df-76fa-4b90-9267-80d02dee5e0b, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 94 KiB/s wr, 10 op/s
Jan 21 14:18:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Jan 21 14:18:28 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:18:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Jan 21 14:18:28 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:18:28 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:18:28 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "snap_name": "325147df-76fa-4b90-9267-80d02dee5e0b_beb041ba-4500-43bc-91c9-794ff68a2025", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:28 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "snap_name": "325147df-76fa-4b90-9267-80d02dee5e0b", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:28 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Jan 21 14:18:28 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Jan 21 14:18:28 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "format": "json"}]: dispatch
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6b95bf55-eb7d-43c0-9f25-36884d529a89' of type subvolume
Jan 21 14:18:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:28.786+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6b95bf55-eb7d-43c0-9f25-36884d529a89' of type subvolume
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6b95bf55-eb7d-43c0-9f25-36884d529a89'' moved to trashcan
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6b95bf55-eb7d-43c0-9f25-36884d529a89, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "format": "json"}]: dispatch
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:28.815+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1154fdb-2d56-4b79-b688-aa930d49c33b' of type subvolume
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1154fdb-2d56-4b79-b688-aa930d49c33b' of type subvolume
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c1154fdb-2d56-4b79-b688-aa930d49c33b'' moved to trashcan
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:18:28 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1154fdb-2d56-4b79-b688-aa930d49c33b, vol_name:cephfs) < ""
Jan 21 14:18:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:18:29 compute-0 ceph-mon[75031]: pgmap v1152: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 94 KiB/s wr, 10 op/s
Jan 21 14:18:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice", "format": "json"}]: dispatch
Jan 21 14:18:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "format": "json"}]: dispatch
Jan 21 14:18:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6b95bf55-eb7d-43c0-9f25-36884d529a89", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "format": "json"}]: dispatch
Jan 21 14:18:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c1154fdb-2d56-4b79-b688-aa930d49c33b", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 147 KiB/s wr, 14 op/s
Jan 21 14:18:30 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "format": "json"}]: dispatch
Jan 21 14:18:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:30 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:30.841+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '75787c7c-a801-4e74-8f54-f20d6b4880b0' of type subvolume
Jan 21 14:18:30 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '75787c7c-a801-4e74-8f54-f20d6b4880b0' of type subvolume
Jan 21 14:18:30 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 14:18:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/75787c7c-a801-4e74-8f54-f20d6b4880b0'' moved to trashcan
Jan 21 14:18:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:18:30 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75787c7c-a801-4e74-8f54-f20d6b4880b0, vol_name:cephfs) < ""
Jan 21 14:18:31 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:18:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:18:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:18:31 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:18:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:18:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:31 compute-0 ceph-mon[75031]: pgmap v1153: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 147 KiB/s wr, 14 op/s
Jan 21 14:18:31 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "format": "json"}]: dispatch
Jan 21 14:18:31 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "75787c7c-a801-4e74-8f54-f20d6b4880b0", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:18:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 21 14:18:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 21 14:18:32 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 21 14:18:32 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "snap_name": "ed4d40c5-f4bd-45ce-9692-e4bc79bb0372_73e67d86-f13b-459e-a543-f49785594c69", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372_73e67d86-f13b-459e-a543-f49785594c69, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 14:18:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp'
Jan 21 14:18:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp' to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta'
Jan 21 14:18:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372_73e67d86-f13b-459e-a543-f49785594c69, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 14:18:32 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "snap_name": "ed4d40c5-f4bd-45ce-9692-e4bc79bb0372", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 14:18:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp'
Jan 21 14:18:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta.tmp' to config b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0/.meta'
Jan 21 14:18:32 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed4d40c5-f4bd-45ce-9692-e4bc79bb0372, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 14:18:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 137 KiB/s wr, 13 op/s
Jan 21 14:18:32 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:18:32 compute-0 ceph-mon[75031]: osdmap e148: 3 total, 3 up, 3 in
Jan 21 14:18:33 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "snap_name": "ed4d40c5-f4bd-45ce-9692-e4bc79bb0372_73e67d86-f13b-459e-a543-f49785594c69", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:33 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "snap_name": "ed4d40c5-f4bd-45ce-9692-e4bc79bb0372", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:33 compute-0 ceph-mon[75031]: pgmap v1155: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 137 KiB/s wr, 13 op/s
Jan 21 14:18:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:18:33.909 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:18:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:18:33.910 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:18:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:18:33.910 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:18:34 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:18:34.364 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:18:34 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:18:34.366 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:18:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 137 KiB/s wr, 12 op/s
Jan 21 14:18:34 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:18:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:18:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:18:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 14:18:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:18:34 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:18:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:34 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:18:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:18:34 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:18:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:18:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:35 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:18:35.368 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:18:35 compute-0 ceph-mon[75031]: pgmap v1156: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 137 KiB/s wr, 12 op/s
Jan 21 14:18:35 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:18:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:18:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:18:35 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:18:35 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:18:35 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "format": "json"}]: dispatch
Jan 21 14:18:35 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6fb09730-0544-4361-97f8-11e56000d2f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:35 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6fb09730-0544-4361-97f8-11e56000d2f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:18:35 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6fb09730-0544-4361-97f8-11e56000d2f0' of type subvolume
Jan 21 14:18:35 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:18:35.937+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6fb09730-0544-4361-97f8-11e56000d2f0' of type subvolume
Jan 21 14:18:35 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:35 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 14:18:35 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6fb09730-0544-4361-97f8-11e56000d2f0'' moved to trashcan
Jan 21 14:18:35 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:18:35 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fb09730-0544-4361-97f8-11e56000d2f0, vol_name:cephfs) < ""
Jan 21 14:18:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 110 KiB/s wr, 12 op/s
Jan 21 14:18:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 21 14:18:36 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "format": "json"}]: dispatch
Jan 21 14:18:36 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6fb09730-0544-4361-97f8-11e56000d2f0", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 21 14:18:36 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 21 14:18:37 compute-0 ceph-mon[75031]: pgmap v1157: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 110 KiB/s wr, 12 op/s
Jan 21 14:18:37 compute-0 ceph-mon[75031]: osdmap e149: 3 total, 3 up, 3 in
Jan 21 14:18:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 57 KiB/s wr, 8 op/s
Jan 21 14:18:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:18:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:18:38 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:18:38 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice_bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:18:39 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:18:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:18:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:39 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:18:39
Jan 21 14:18:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:18:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:18:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Jan 21 14:18:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:18:40 compute-0 ceph-mon[75031]: pgmap v1159: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 57 KiB/s wr, 8 op/s
Jan 21 14:18:40 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:18:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:40 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 140 KiB/s wr, 15 op/s
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:18:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:18:41 compute-0 ceph-mon[75031]: pgmap v1160: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 140 KiB/s wr, 15 op/s
Jan 21 14:18:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 21 14:18:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 21 14:18:41 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 21 14:18:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:18:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Jan 21 14:18:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:18:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Jan 21 14:18:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:18:42 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:18:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:18:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:18:42 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:18:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:18:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:42 compute-0 ceph-mon[75031]: osdmap e150: 3 total, 3 up, 3 in
Jan 21 14:18:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Jan 21 14:18:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Jan 21 14:18:42 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Jan 21 14:18:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 143 KiB/s wr, 14 op/s
Jan 21 14:18:43 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:18:43 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice_bob", "format": "json"}]: dispatch
Jan 21 14:18:43 compute-0 ceph-mon[75031]: pgmap v1162: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 143 KiB/s wr, 14 op/s
Jan 21 14:18:43 compute-0 nova_compute[239261]: 2026-01-21 14:18:43.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:44 compute-0 podman[251991]: 2026-01-21 14:18:44.340191458 +0000 UTC m=+0.056156775 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 14:18:44 compute-0 podman[251990]: 2026-01-21 14:18:44.36762195 +0000 UTC m=+0.087733904 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 21 14:18:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 8 op/s
Jan 21 14:18:45 compute-0 ceph-mon[75031]: pgmap v1163: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 8 op/s
Jan 21 14:18:45 compute-0 nova_compute[239261]: 2026-01-21 14:18:45.869 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 212 B/s rd, 91 KiB/s wr, 8 op/s
Jan 21 14:18:46 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:18:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:18:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:46 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:18:46 compute-0 nova_compute[239261]: 2026-01-21 14:18:46.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:18:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:46 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.752784) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126752828, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1310, "num_deletes": 252, "total_data_size": 1605822, "memory_usage": 1639048, "flush_reason": "Manual Compaction"}
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126766778, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1576581, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24834, "largest_seqno": 26143, "table_properties": {"data_size": 1570254, "index_size": 3338, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15677, "raw_average_key_size": 21, "raw_value_size": 1556712, "raw_average_value_size": 2092, "num_data_blocks": 149, "num_entries": 744, "num_filter_entries": 744, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769005052, "oldest_key_time": 1769005052, "file_creation_time": 1769005126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 14057 microseconds, and 7093 cpu microseconds.
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.766838) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1576581 bytes OK
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.766862) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.770329) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.770355) EVENT_LOG_v1 {"time_micros": 1769005126770349, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.770376) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1599388, prev total WAL file size 1599388, number of live WAL files 2.
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.771143) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1539KB)], [56(10MB)]
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126771205, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12479558, "oldest_snapshot_seqno": -1}
Jan 21 14:18:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:46 compute-0 nova_compute[239261]: 2026-01-21 14:18:46.786 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:18:46 compute-0 nova_compute[239261]: 2026-01-21 14:18:46.786 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:18:46 compute-0 nova_compute[239261]: 2026-01-21 14:18:46.787 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:18:46 compute-0 nova_compute[239261]: 2026-01-21 14:18:46.787 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:18:46 compute-0 nova_compute[239261]: 2026-01-21 14:18:46.787 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5553 keys, 10672296 bytes, temperature: kUnknown
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126855859, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10672296, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10631179, "index_size": 26159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 138286, "raw_average_key_size": 24, "raw_value_size": 10527586, "raw_average_value_size": 1895, "num_data_blocks": 1091, "num_entries": 5553, "num_filter_entries": 5553, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769005126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.856222) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10672296 bytes
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.857927) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.2 rd, 125.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.4 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(14.7) write-amplify(6.8) OK, records in: 6080, records dropped: 527 output_compression: NoCompression
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.857948) EVENT_LOG_v1 {"time_micros": 1769005126857938, "job": 30, "event": "compaction_finished", "compaction_time_micros": 84802, "compaction_time_cpu_micros": 32203, "output_level": 6, "num_output_files": 1, "total_output_size": 10672296, "num_input_records": 6080, "num_output_records": 5553, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126858472, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005126861292, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.771035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.861390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.861399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.861400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.861402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:18:46 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:18:46.861406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:18:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:18:47 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867212393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:18:47 compute-0 nova_compute[239261]: 2026-01-21 14:18:47.345 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:18:47 compute-0 nova_compute[239261]: 2026-01-21 14:18:47.492 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:18:47 compute-0 nova_compute[239261]: 2026-01-21 14:18:47.493 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5058MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:18:47 compute-0 nova_compute[239261]: 2026-01-21 14:18:47.493 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:18:47 compute-0 nova_compute[239261]: 2026-01-21 14:18:47.494 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:18:47 compute-0 ceph-mon[75031]: pgmap v1164: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 212 B/s rd, 91 KiB/s wr, 8 op/s
Jan 21 14:18:47 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:18:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:47 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:47 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2867212393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:18:47 compute-0 nova_compute[239261]: 2026-01-21 14:18:47.757 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:18:47 compute-0 nova_compute[239261]: 2026-01-21 14:18:47.757 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:18:47 compute-0 nova_compute[239261]: 2026-01-21 14:18:47.871 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing inventories for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 21 14:18:47 compute-0 nova_compute[239261]: 2026-01-21 14:18:47.966 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating ProviderTree inventory for provider 172aa181-ce4f-4953-808e-b8a26e60249f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 21 14:18:47 compute-0 nova_compute[239261]: 2026-01-21 14:18:47.967 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating inventory in ProviderTree for provider 172aa181-ce4f-4953-808e-b8a26e60249f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 21 14:18:47 compute-0 nova_compute[239261]: 2026-01-21 14:18:47.982 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing aggregate associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 21 14:18:48 compute-0 nova_compute[239261]: 2026-01-21 14:18:48.007 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing trait associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 21 14:18:48 compute-0 nova_compute[239261]: 2026-01-21 14:18:48.022 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:18:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 88 KiB/s wr, 8 op/s
Jan 21 14:18:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:18:48 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541470220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:18:48 compute-0 nova_compute[239261]: 2026-01-21 14:18:48.565 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:18:48 compute-0 nova_compute[239261]: 2026-01-21 14:18:48.574 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:18:48 compute-0 nova_compute[239261]: 2026-01-21 14:18:48.592 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:18:48 compute-0 nova_compute[239261]: 2026-01-21 14:18:48.595 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:18:48 compute-0 nova_compute[239261]: 2026-01-21 14:18:48.596 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/0d39357a-8414-4374-b0d6-05a412ce9464'.
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "format": "json"}]: dispatch
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:18:49 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:49 compute-0 ceph-mon[75031]: pgmap v1165: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 88 KiB/s wr, 8 op/s
Jan 21 14:18:49 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/541470220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:18:49 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:18:49 compute-0 nova_compute[239261]: 2026-01-21 14:18:49.596 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:49 compute-0 nova_compute[239261]: 2026-01-21 14:18:49.597 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:18:49 compute-0 nova_compute[239261]: 2026-01-21 14:18:49.597 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:18:49 compute-0 nova_compute[239261]: 2026-01-21 14:18:49.622 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:18:49 compute-0 nova_compute[239261]: 2026-01-21 14:18:49.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:49 compute-0 nova_compute[239261]: 2026-01-21 14:18:49.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:49 compute-0 nova_compute[239261]: 2026-01-21 14:18:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:18:49 compute-0 nova_compute[239261]: 2026-01-21 14:18:49.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:49 compute-0 nova_compute[239261]: 2026-01-21 14:18:49.726 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:18:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 14:18:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:18:49 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:18:49 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:18:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s wr, 5 op/s
Jan 21 14:18:50 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:18:50 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "format": "json"}]: dispatch
Jan 21 14:18:50 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:50 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:50 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:18:50 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:18:50 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666226090619191 of space, bias 1.0, pg target 0.1998678271857573 quantized to 32 (current 32)
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0004046650892931975 of space, bias 4.0, pg target 0.48559810715183704 quantized to 16 (current 16)
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:18:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:18:50 compute-0 nova_compute[239261]: 2026-01-21 14:18:50.739 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:51 compute-0 nova_compute[239261]: 2026-01-21 14:18:51.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:51 compute-0 nova_compute[239261]: 2026-01-21 14:18:51.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:51 compute-0 ceph-mon[75031]: pgmap v1166: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s wr, 5 op/s
Jan 21 14:18:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s wr, 5 op/s
Jan 21 14:18:53 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "e9dfc6f5-6817-4818-8b7a-6638ecfd5d54", "format": "json"}]: dispatch
Jan 21 14:18:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:53 compute-0 ceph-mon[75031]: pgmap v1167: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s wr, 5 op/s
Jan 21 14:18:54 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:18:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:18:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:54 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID alice bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:18:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:18:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:54 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:18:54 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5", "format": "json"}]: dispatch
Jan 21 14:18:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:54 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 5 op/s
Jan 21 14:18:54 compute-0 nova_compute[239261]: 2026-01-21 14:18:54.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "e9dfc6f5-6817-4818-8b7a-6638ecfd5d54", "format": "json"}]: dispatch
Jan 21 14:18:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "r", "format": "json"}]: dispatch
Jan 21 14:18:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:18:54 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:18:55 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5", "format": "json"}]: dispatch
Jan 21 14:18:55 compute-0 ceph-mon[75031]: pgmap v1168: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 5 op/s
Jan 21 14:18:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5_3865f07e-c822-4316-8c79-2b1c82ad80c4", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5_3865f07e-c822-4316-8c79-2b1c82ad80c4, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:18:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:18:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5_3865f07e-c822-4316-8c79-2b1c82ad80c4, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:18:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:18:56 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:18:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 88 KiB/s wr, 8 op/s
Jan 21 14:18:57 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5_3865f07e-c822-4316-8c79-2b1c82ad80c4", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:57 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "a8a4303d-8f0c-4d7c-83ea-8406b6d6fcc5", "force": true, "format": "json"}]: dispatch
Jan 21 14:18:57 compute-0 ceph-mon[75031]: pgmap v1169: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 88 KiB/s wr, 8 op/s
Jan 21 14:18:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 74 KiB/s wr, 7 op/s
Jan 21 14:18:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Jan 21 14:18:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Jan 21 14:18:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:18:59 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:18:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:18:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:18:59 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:18:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:18:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:18:59 compute-0 nova_compute[239261]: 2026-01-21 14:18:59.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:18:59 compute-0 nova_compute[239261]: 2026-01-21 14:18:59.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 21 14:18:59 compute-0 nova_compute[239261]: 2026-01-21 14:18:59.750 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 21 14:18:59 compute-0 ceph-mon[75031]: pgmap v1170: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 74 KiB/s wr, 7 op/s
Jan 21 14:18:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Jan 21 14:18:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Jan 21 14:18:59 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Jan 21 14:18:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fda77d42-2af2-4751-8381-d4861d82e3b5", "format": "json"}]: dispatch
Jan 21 14:18:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:18:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 110 KiB/s wr, 10 op/s
Jan 21 14:19:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:19:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "alice bob", "format": "json"}]: dispatch
Jan 21 14:19:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fda77d42-2af2-4751-8381-d4861d82e3b5", "format": "json"}]: dispatch
Jan 21 14:19:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:19:01 compute-0 ceph-mon[75031]: pgmap v1171: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 110 KiB/s wr, 10 op/s
Jan 21 14:19:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 21 14:19:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 21 14:19:02 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 21 14:19:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 9 op/s
Jan 21 14:19:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:19:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:19:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 21 14:19:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 14:19:03 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: Creating meta for ID bob with tenant 7be9e3a0119b40f692133210ebe5f9a2
Jan 21 14:19:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} v 0)
Jan 21 14:19:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:19:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:19:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:19:03 compute-0 ceph-mon[75031]: osdmap e151: 3 total, 3 up, 3 in
Jan 21 14:19:03 compute-0 ceph-mon[75031]: pgmap v1173: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 9 op/s
Jan 21 14:19:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 14:19:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"} : dispatch
Jan 21 14:19:03 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b", "mon", "allow r"], "format": "json"}]': finished
Jan 21 14:19:04 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:19:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 9 op/s
Jan 21 14:19:05 compute-0 ceph-mon[75031]: pgmap v1174: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 9 op/s
Jan 21 14:19:05 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fda77d42-2af2-4751-8381-d4861d82e3b5_882c1b0b-28af-4e8f-82ab-4260c6bbe2e1", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5_882c1b0b-28af-4e8f-82ab-4260c6bbe2e1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:19:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:19:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5_882c1b0b-28af-4e8f-82ab-4260c6bbe2e1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:05 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fda77d42-2af2-4751-8381-d4861d82e3b5", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:19:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:19:05 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fda77d42-2af2-4751-8381-d4861d82e3b5, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:19:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Jan 21 14:19:06 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fda77d42-2af2-4751-8381-d4861d82e3b5_882c1b0b-28af-4e8f-82ab-4260c6bbe2e1", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:06 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fda77d42-2af2-4751-8381-d4861d82e3b5", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:19:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 14:19:07 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57'.
Jan 21 14:19:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/.meta.tmp'
Jan 21 14:19:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/.meta.tmp' to config b'/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/.meta'
Jan 21 14:19:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 14:19:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "format": "json"}]: dispatch
Jan 21 14:19:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 14:19:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 14:19:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:19:07 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:19:07 compute-0 ceph-mon[75031]: pgmap v1175: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Jan 21 14:19:07 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:19:08 compute-0 sudo[252082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:19:08 compute-0 sudo[252082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:19:08 compute-0 sudo[252082]: pam_unix(sudo:session): session closed for user root
Jan 21 14:19:08 compute-0 sudo[252107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:19:08 compute-0 sudo[252107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:19:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Jan 21 14:19:08 compute-0 sudo[252107]: pam_unix(sudo:session): session closed for user root
Jan 21 14:19:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:19:09 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:19:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:19:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:19:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:19:09 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fe9bc1b3-d5c9-4565-8fc7-bafb91560e19", "format": "json"}]: dispatch
Jan 21 14:19:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:19:09 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "format": "json"}]: dispatch
Jan 21 14:19:09 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:19:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:19:09 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:19:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:19:09 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:19:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:19:09 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:19:09 compute-0 sudo[252164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:19:09 compute-0 sudo[252164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:19:09 compute-0 sudo[252164]: pam_unix(sudo:session): session closed for user root
Jan 21 14:19:09 compute-0 sudo[252189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:19:09 compute-0 sudo[252189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:19:10 compute-0 podman[252226]: 2026-01-21 14:19:10.074807138 +0000 UTC m=+0.041858010 container create ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 14:19:10 compute-0 systemd[1]: Started libpod-conmon-ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8.scope.
Jan 21 14:19:10 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:19:10 compute-0 podman[252226]: 2026-01-21 14:19:10.058469136 +0000 UTC m=+0.025520028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:19:10 compute-0 podman[252226]: 2026-01-21 14:19:10.16335376 +0000 UTC m=+0.130404662 container init ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 14:19:10 compute-0 podman[252226]: 2026-01-21 14:19:10.172878723 +0000 UTC m=+0.139929605 container start ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:19:10 compute-0 podman[252226]: 2026-01-21 14:19:10.176389906 +0000 UTC m=+0.143440798 container attach ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:19:10 compute-0 systemd[1]: libpod-ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8.scope: Deactivated successfully.
Jan 21 14:19:10 compute-0 cranky_wescoff[252242]: 167 167
Jan 21 14:19:10 compute-0 podman[252226]: 2026-01-21 14:19:10.183117112 +0000 UTC m=+0.150168004 container died ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:19:10 compute-0 conmon[252242]: conmon ec401cbc4e3f48133ba7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8.scope/container/memory.events
Jan 21 14:19:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d261991db5eab680448f4653c44821a4b6d29c821b40fa57ed658ed9606f2c8-merged.mount: Deactivated successfully.
Jan 21 14:19:10 compute-0 podman[252226]: 2026-01-21 14:19:10.237913945 +0000 UTC m=+0.204964847 container remove ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wescoff, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 14:19:10 compute-0 systemd[1]: libpod-conmon-ec401cbc4e3f48133ba76dac71304a1c4f56cd33ae66cab7c694ae42c867c7e8.scope: Deactivated successfully.
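The create → init → start → attach → died → remove sequence above is a short-lived cephadm probe container; its only output is "167 167" (the cranky_wescoff line), which matches the uid/gid of the ceph user baked into the image. A minimal sketch of reproducing that probe, assuming podman is on PATH and that the check is equivalent to stat'ing /var/lib/ceph inside the image (an assumption, chosen because it reproduces the "167 167" output):

```python
# Hypothetical re-run of the uid/gid probe seen above; assumes podman is
# installed and the image digest from the log is still pullable.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

# Assumption: the probe stats a path owned by the ceph user inside the
# container; /var/lib/ceph matches the "167 167" printed in the log.
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
)
uid, gid = out.stdout.split()
print(uid, gid)  # expected: 167 167, the ceph user/group in the image
```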
Jan 21 14:19:10 compute-0 podman[252268]: 2026-01-21 14:19:10.435018646 +0000 UTC m=+0.045054885 container create 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:19:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 73 KiB/s wr, 7 op/s
Jan 21 14:19:10 compute-0 systemd[1]: Started libpod-conmon-6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2.scope.
Jan 21 14:19:10 compute-0 podman[252268]: 2026-01-21 14:19:10.414857085 +0000 UTC m=+0.024893354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:19:10 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
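The kernel's "supports timestamps until 2038" lines fire once per bind-mount into the container because the backing xfs filesystem was created without the bigtime feature; they are warnings, not errors. A small out-of-band check, assuming xfsprogs is installed and the mount point below is the one backing /var/lib/containers (illustrative):

```python
# Quick check for the xfs "bigtime" feature behind the y2038 warnings above.
# Assumes xfsprogs is installed; the mount point is illustrative.
import subprocess

info = subprocess.run(
    ["xfs_info", "/var/lib/containers"],  # any path on the xfs in question
    capture_output=True, text=True, check=True,
).stdout
# Newer xfs_info prints "bigtime=1" when the filesystem stores 64-bit
# timestamps; bigtime=0 filesystems keep the 2038 limit warned about here.
print("bigtime enabled" if "bigtime=1" in info
      else "bigtime disabled (y2038 limit applies)")
```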
Jan 21 14:19:10 compute-0 podman[252268]: 2026-01-21 14:19:10.545419419 +0000 UTC m=+0.155455718 container init 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 14:19:10 compute-0 podman[252268]: 2026-01-21 14:19:10.562729635 +0000 UTC m=+0.172765904 container start 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:19:10 compute-0 podman[252268]: 2026-01-21 14:19:10.566927783 +0000 UTC m=+0.176964122 container attach 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:19:10 compute-0 ceph-mon[75031]: pgmap v1176: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Jan 21 14:19:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:19:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:19:10 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fe9bc1b3-d5c9-4565-8fc7-bafb91560e19", "format": "json"}]: dispatch
Jan 21 14:19:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:19:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:19:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:19:10 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:19:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:19:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:19:11 compute-0 sad_sanderson[252284]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:19:11 compute-0 sad_sanderson[252284]: --> All data devices are unavailable
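"passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" is cephadm's OSD-spec reconciliation concluding that all three LVs are already consumed by running OSDs, so there is nothing new to deploy. A sketch of checking availability out-of-band, under the assumption that ceph-volume's inventory "available" field is the criterion behind this message:

```python
# Sketch: list devices ceph-volume considers available for new OSDs.
# Assumes "cephadm shell" works on this host; "available" as criterion
# is an assumption.
import json
import subprocess

raw = subprocess.run(
    ["cephadm", "shell", "--", "ceph-volume", "inventory", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
devices = json.loads(raw)
free = [d["path"] for d in devices if d.get("available")]
print(free or "no available data devices (matches the log)")
```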
Jan 21 14:19:11 compute-0 systemd[1]: libpod-6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2.scope: Deactivated successfully.
Jan 21 14:19:11 compute-0 podman[252268]: 2026-01-21 14:19:11.108087244 +0000 UTC m=+0.718123483 container died 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 14:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-29dd2b4e723f2003a14a5da727c01e42a1c6e5700c02142b8f557233cd4c7217-merged.mount: Deactivated successfully.
Jan 21 14:19:11 compute-0 podman[252268]: 2026-01-21 14:19:11.152399521 +0000 UTC m=+0.762435780 container remove 6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sanderson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:19:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:19:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:19:11 compute-0 systemd[1]: libpod-conmon-6ee57f8681e0ded9778eed04f8650c7f48f448d97502756f7470e524d86a8eb2.scope: Deactivated successfully.
Jan 21 14:19:11 compute-0 sudo[252189]: pam_unix(sudo:session): session closed for user root
Jan 21 14:19:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:19:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:19:11 compute-0 sudo[252316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:19:11 compute-0 sudo[252316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:19:11 compute-0 sudo[252316]: pam_unix(sudo:session): session closed for user root
Jan 21 14:19:11 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "auth_id": "bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:19:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
Jan 21 14:19:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 21 14:19:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 14:19:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575,allow rw path=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c707dfa3-0985-4b01-bd2d-86b20bf31443"]} v 0)
Jan 21 14:19:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575,allow rw path=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c707dfa3-0985-4b01-bd2d-86b20bf31443"]} : dispatch
Jan 21 14:19:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575,allow rw path=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c707dfa3-0985-4b01-bd2d-86b20bf31443"]}]': finished
Jan 21 14:19:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 21 14:19:11 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 14:19:11 compute-0 sudo[252341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:19:11 compute-0 sudo[252341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:19:11 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, tenant_id:7be9e3a0119b40f692133210ebe5f9a2, vol_name:cephfs) < ""
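The authorize exchange above runs as follows: the mgr receives "fs subvolume authorize", reads the existing key with "auth get", then widens it with "auth caps" so client.bob keeps rw on the earlier subvolume (424167b3-…) and gains rw on the new one (c707dfa3-…), expressed as per-path MDS caps plus per-namespace OSD caps on cephfs.cephfs.data. A sketch of the client-side command that triggers this, assuming admin credentials on the host; names are copied verbatim from the log:

```python
# Client-side equivalent of the mgr/mon exchange above. Assumes an admin
# keyring is present; vol/sub/auth_id are taken verbatim from the log.
import subprocess

subprocess.run(
    ["ceph", "fs", "subvolume", "authorize",
     "cephfs",                                  # vol_name
     "c707dfa3-0985-4b01-bd2d-86b20bf31443",    # sub_name
     "bob",                                     # auth_id
     "--access_level=rw"],
    check=True,
)
# The mgr then issues "auth caps" for client.bob, appending an
# "allow rw path=/volumes/_nogroup/<sub>/<uuid>" MDS cap and an
# "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_..." OSD cap.
```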
Jan 21 14:19:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:19:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 21 14:19:11 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 21 14:19:11 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 21 14:19:11 compute-0 ceph-mon[75031]: pgmap v1177: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 73 KiB/s wr, 7 op/s
Jan 21 14:19:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 14:19:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575,allow rw path=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c707dfa3-0985-4b01-bd2d-86b20bf31443"]} : dispatch
Jan 21 14:19:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575,allow rw path=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c707dfa3-0985-4b01-bd2d-86b20bf31443"]}]': finished
Jan 21 14:19:11 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 14:19:11 compute-0 ceph-mon[75031]: osdmap e152: 3 total, 3 up, 3 in
Jan 21 14:19:11 compute-0 podman[252379]: 2026-01-21 14:19:11.665116246 +0000 UTC m=+0.047856270 container create 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:19:11 compute-0 systemd[1]: Started libpod-conmon-866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c.scope.
Jan 21 14:19:11 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:19:11 compute-0 podman[252379]: 2026-01-21 14:19:11.648077908 +0000 UTC m=+0.030817952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:19:11 compute-0 podman[252379]: 2026-01-21 14:19:11.743542771 +0000 UTC m=+0.126282825 container init 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 14:19:11 compute-0 podman[252379]: 2026-01-21 14:19:11.754035356 +0000 UTC m=+0.136775390 container start 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 21 14:19:11 compute-0 podman[252379]: 2026-01-21 14:19:11.758797858 +0000 UTC m=+0.141538092 container attach 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:19:11 compute-0 great_curran[252396]: 167 167
Jan 21 14:19:11 compute-0 systemd[1]: libpod-866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c.scope: Deactivated successfully.
Jan 21 14:19:11 compute-0 podman[252379]: 2026-01-21 14:19:11.763230212 +0000 UTC m=+0.145970236 container died 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e35f387ad88eba1b4c7b2cde2f87c8ff8e1052d28b9d4a093f819539a74a6806-merged.mount: Deactivated successfully.
Jan 21 14:19:11 compute-0 podman[252379]: 2026-01-21 14:19:11.808927161 +0000 UTC m=+0.191667185 container remove 866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:19:11 compute-0 systemd[1]: libpod-conmon-866500a2a9d1d0c651d48122290bc0774ea6ad4eee101f3ed3373755accc430c.scope: Deactivated successfully.
Jan 21 14:19:12 compute-0 podman[252420]: 2026-01-21 14:19:12.009065144 +0000 UTC m=+0.059866952 container create 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:19:12 compute-0 systemd[1]: Started libpod-conmon-5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd.scope.
Jan 21 14:19:12 compute-0 podman[252420]: 2026-01-21 14:19:11.981245833 +0000 UTC m=+0.032047641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:19:12 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82396bcdc37d95194b2313ee4ff8a8f7afc63c21b69a5c88c6fa7c51fb65378/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82396bcdc37d95194b2313ee4ff8a8f7afc63c21b69a5c88c6fa7c51fb65378/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82396bcdc37d95194b2313ee4ff8a8f7afc63c21b69a5c88c6fa7c51fb65378/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82396bcdc37d95194b2313ee4ff8a8f7afc63c21b69a5c88c6fa7c51fb65378/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:12 compute-0 podman[252420]: 2026-01-21 14:19:12.107716441 +0000 UTC m=+0.158518299 container init 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 14:19:12 compute-0 podman[252420]: 2026-01-21 14:19:12.116233861 +0000 UTC m=+0.167035659 container start 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:19:12 compute-0 podman[252420]: 2026-01-21 14:19:12.120539141 +0000 UTC m=+0.171341029 container attach 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:19:12 compute-0 stupefied_keller[252437]: {
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:     "0": [
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:         {
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "devices": [
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "/dev/loop3"
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             ],
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_name": "ceph_lv0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_size": "21470642176",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "name": "ceph_lv0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "tags": {
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.cluster_name": "ceph",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.crush_device_class": "",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.encrypted": "0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.objectstore": "bluestore",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.osd_id": "0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.type": "block",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.vdo": "0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.with_tpm": "0"
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             },
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "type": "block",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "vg_name": "ceph_vg0"
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:         }
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:     ],
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:     "1": [
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:         {
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "devices": [
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "/dev/loop4"
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             ],
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_name": "ceph_lv1",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_size": "21470642176",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "name": "ceph_lv1",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "tags": {
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.cluster_name": "ceph",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.crush_device_class": "",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.encrypted": "0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.objectstore": "bluestore",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.osd_id": "1",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.type": "block",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.vdo": "0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.with_tpm": "0"
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             },
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "type": "block",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "vg_name": "ceph_vg1"
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:         }
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:     ],
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:     "2": [
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:         {
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "devices": [
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "/dev/loop5"
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             ],
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_name": "ceph_lv2",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_size": "21470642176",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "name": "ceph_lv2",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "tags": {
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.cluster_name": "ceph",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.crush_device_class": "",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.encrypted": "0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.objectstore": "bluestore",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.osd_id": "2",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.type": "block",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.vdo": "0",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:                 "ceph.with_tpm": "0"
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             },
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "type": "block",
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:             "vg_name": "ceph_vg2"
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:         }
Jan 21 14:19:12 compute-0 stupefied_keller[252437]:     ]
Jan 21 14:19:12 compute-0 stupefied_keller[252437]: }
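The JSON above is the output of "ceph-volume lvm list --format json" (the cephadm invocation in the sudo line at 14:19:11): a map of OSD id → list of LV records, where lv_tags/tags carry the cluster fsid, osd_fsid, and objectstore. A short parse of that output into an osd_id → device table, assuming the same cephadm invocation:

```python
# Parse "ceph-volume lvm list --format json" (as shown above) into an
# osd_id -> (lv_path, backing device, osd_fsid) table.
import json
import subprocess

raw = subprocess.run(
    ["cephadm", "shell", "--fsid", "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
     "--", "ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

for osd_id, lvs in sorted(json.loads(raw).items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        print(osd_id, lv["lv_path"], lv["devices"][0],
              lv["tags"]["ceph.osd_fsid"])
# expected, per the log:
# 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 bb69e93d-312d-404f-89ad-65c71069da0f
# 1 /dev/ceph_vg1/ceph_lv1 /dev/loop4 e72716bc-fd8c-40ef-ada4-83584d595d05
# 2 /dev/ceph_vg2/ceph_lv2 /dev/loop5 8d905f10-e78d-4894-96b3-7b33a725e1b7
```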
Jan 21 14:19:12 compute-0 systemd[1]: libpod-5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd.scope: Deactivated successfully.
Jan 21 14:19:12 compute-0 conmon[252437]: conmon 5da9708234c7a7c6c3f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd.scope/container/memory.events
Jan 21 14:19:12 compute-0 podman[252420]: 2026-01-21 14:19:12.425330753 +0000 UTC m=+0.476132551 container died 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 21 14:19:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 21 14:19:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 21 14:19:12 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 21 14:19:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b82396bcdc37d95194b2313ee4ff8a8f7afc63c21b69a5c88c6fa7c51fb65378-merged.mount: Deactivated successfully.
Jan 21 14:19:12 compute-0 podman[252420]: 2026-01-21 14:19:12.474741028 +0000 UTC m=+0.525542866 container remove 5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 21 14:19:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 91 KiB/s wr, 8 op/s
Jan 21 14:19:12 compute-0 systemd[1]: libpod-conmon-5da9708234c7a7c6c3f14f5cce44c7098299cba9eff2ee7e889e25234ff4f7cd.scope: Deactivated successfully.
Jan 21 14:19:12 compute-0 sudo[252341]: pam_unix(sudo:session): session closed for user root
Jan 21 14:19:12 compute-0 sudo[252457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:19:12 compute-0 sudo[252457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:19:12 compute-0 sudo[252457]: pam_unix(sudo:session): session closed for user root
Jan 21 14:19:12 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "auth_id": "bob", "tenant_id": "7be9e3a0119b40f692133210ebe5f9a2", "access_level": "rw", "format": "json"}]: dispatch
Jan 21 14:19:12 compute-0 ceph-mon[75031]: osdmap e153: 3 total, 3 up, 3 in
Jan 21 14:19:12 compute-0 sudo[252482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:19:12 compute-0 sudo[252482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:19:13 compute-0 podman[252520]: 2026-01-21 14:19:13.004520914 +0000 UTC m=+0.054299032 container create 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 14:19:13 compute-0 systemd[1]: Started libpod-conmon-7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da.scope.
Jan 21 14:19:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:19:13 compute-0 podman[252520]: 2026-01-21 14:19:12.979720573 +0000 UTC m=+0.029498671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:19:13 compute-0 podman[252520]: 2026-01-21 14:19:13.098540643 +0000 UTC m=+0.148318811 container init 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 14:19:13 compute-0 podman[252520]: 2026-01-21 14:19:13.107063453 +0000 UTC m=+0.156841551 container start 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 14:19:13 compute-0 podman[252520]: 2026-01-21 14:19:13.11123118 +0000 UTC m=+0.161009378 container attach 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 14:19:13 compute-0 eloquent_joliot[252536]: 167 167
Jan 21 14:19:13 compute-0 systemd[1]: libpod-7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da.scope: Deactivated successfully.
Jan 21 14:19:13 compute-0 conmon[252536]: conmon 7e9d8c6d849c486800be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da.scope/container/memory.events
Jan 21 14:19:13 compute-0 podman[252520]: 2026-01-21 14:19:13.11594184 +0000 UTC m=+0.165719978 container died 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 14:19:13 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fe9bc1b3-d5c9-4565-8fc7-bafb91560e19_b761fcee-f754-4df5-8daa-e066bd13935b", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19_b761fcee-f754-4df5-8daa-e066bd13935b, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d5fd12b78a44c9364de32a6ca20a49c0a04138e0af7864cee08f73a58c862d6-merged.mount: Deactivated successfully.
Jan 21 14:19:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:19:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:19:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19_b761fcee-f754-4df5-8daa-e066bd13935b, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:13 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fe9bc1b3-d5c9-4565-8fc7-bafb91560e19", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:13 compute-0 podman[252520]: 2026-01-21 14:19:13.165292195 +0000 UTC m=+0.215070293 container remove 7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:19:13 compute-0 systemd[1]: libpod-conmon-7e9d8c6d849c486800be63d6b8c9fc6676ac3833fa705356223d8825f43972da.scope: Deactivated successfully.
Jan 21 14:19:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:19:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:19:13 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fe9bc1b3-d5c9-4565-8fc7-bafb91560e19, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
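The two snapshot-rm dispatches above complete the cycle that client.openstack started with the snapshot create at 14:19:10; each rm rewrites the subvolume's .meta atomically via the .meta.tmp rename logged by the metadata_manager. A sketch of the same lifecycle from the CLI, with names copied verbatim from the log (the first rm in the log targets a suffixed snapshot name, fe9bc1b3-…_b761fcee-…; the plain name is used here for brevity):

```python
# Sketch of the snapshot lifecycle driven by client.openstack in this window;
# subvolume and snapshot names are copied verbatim from the log.
import subprocess

VOL, SUB = "cephfs", "4fe02932-2d04-427d-b4f6-1c341396704b"
SNAP = "fe9bc1b3-d5c9-4565-8fc7-bafb91560e19"

# create (dispatched at 14:19:10) ...
subprocess.run(["ceph", "fs", "subvolume", "snapshot", "create",
                VOL, SUB, SNAP], check=True)
# ... and forced removal (dispatched at 14:19:13); --force lets the rm
# succeed even if the snapshot is already gone.
subprocess.run(["ceph", "fs", "subvolume", "snapshot", "rm",
                VOL, SUB, SNAP, "--force"], check=True)
```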
Jan 21 14:19:13 compute-0 podman[252561]: 2026-01-21 14:19:13.360517583 +0000 UTC m=+0.053526193 container create ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:19:13 compute-0 systemd[1]: Started libpod-conmon-ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b.scope.
Jan 21 14:19:13 compute-0 podman[252561]: 2026-01-21 14:19:13.342274046 +0000 UTC m=+0.035282626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:19:13 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab0ebb76426b003a4d696ef79010ef32384a82e5bdf7ccf62f4625ebecc434da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab0ebb76426b003a4d696ef79010ef32384a82e5bdf7ccf62f4625ebecc434da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab0ebb76426b003a4d696ef79010ef32384a82e5bdf7ccf62f4625ebecc434da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab0ebb76426b003a4d696ef79010ef32384a82e5bdf7ccf62f4625ebecc434da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:19:13 compute-0 podman[252561]: 2026-01-21 14:19:13.470179728 +0000 UTC m=+0.163188358 container init ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 21 14:19:13 compute-0 podman[252561]: 2026-01-21 14:19:13.478573034 +0000 UTC m=+0.171581604 container start ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 14:19:13 compute-0 podman[252561]: 2026-01-21 14:19:13.482060447 +0000 UTC m=+0.175069067 container attach ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 14:19:13 compute-0 ceph-mon[75031]: pgmap v1180: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 91 KiB/s wr, 8 op/s
Jan 21 14:19:14 compute-0 lvm[252656]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:19:14 compute-0 lvm[252655]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:19:14 compute-0 lvm[252656]: VG ceph_vg1 finished
Jan 21 14:19:14 compute-0 lvm[252655]: VG ceph_vg0 finished
Jan 21 14:19:14 compute-0 lvm[252658]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:19:14 compute-0 lvm[252658]: VG ceph_vg2 finished
Jan 21 14:19:14 compute-0 cranky_chatterjee[252577]: {}
Jan 21 14:19:14 compute-0 systemd[1]: libpod-ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b.scope: Deactivated successfully.
Jan 21 14:19:14 compute-0 podman[252561]: 2026-01-21 14:19:14.343510711 +0000 UTC m=+1.036519291 container died ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 14:19:14 compute-0 systemd[1]: libpod-ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b.scope: Consumed 1.465s CPU time.
Jan 21 14:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab0ebb76426b003a4d696ef79010ef32384a82e5bdf7ccf62f4625ebecc434da-merged.mount: Deactivated successfully.
Jan 21 14:19:14 compute-0 podman[252561]: 2026-01-21 14:19:14.393666025 +0000 UTC m=+1.086674615 container remove ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Jan 21 14:19:14 compute-0 systemd[1]: libpod-conmon-ddd82f7d2e0ac247b123ee7362b1f7e3efaf54e62bedca5fc5ac1fd6246ed92b.scope: Deactivated successfully.
Jan 21 14:19:14 compute-0 sudo[252482]: pam_unix(sudo:session): session closed for user root
Jan 21 14:19:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:19:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:19:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:19:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:19:14 compute-0 podman[252672]: 2026-01-21 14:19:14.478478219 +0000 UTC m=+0.070476770 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 21 14:19:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 52 KiB/s wr, 5 op/s
Jan 21 14:19:14 compute-0 podman[252674]: 2026-01-21 14:19:14.514635965 +0000 UTC m=+0.106633026 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 21 14:19:14 compute-0 sudo[252713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:19:14 compute-0 sudo[252713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:19:14 compute-0 sudo[252713]: pam_unix(sudo:session): session closed for user root
Jan 21 14:19:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fe9bc1b3-d5c9-4565-8fc7-bafb91560e19_b761fcee-f754-4df5-8daa-e066bd13935b", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "fe9bc1b3-d5c9-4565-8fc7-bafb91560e19", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:19:14 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:19:14 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 14:19:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 14:19:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 21 14:19:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 14:19:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b"]} v 0)
Jan 21 14:19:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b"]} : dispatch
Jan 21 14:19:14 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b"]}]': finished
Jan 21 14:19:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 14:19:14 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 14:19:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 14:19:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57
Jan 21 14:19:14 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/c707dfa3-0985-4b01-bd2d-86b20bf31443/626d5d30-99af-40b0-a0ee-52f501bcaa57],prefix=session evict} (starting...)
Jan 21 14:19:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:19:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:c707dfa3-0985-4b01-bd2d-86b20bf31443, vol_name:cephfs) < ""
Jan 21 14:19:15 compute-0 ceph-mon[75031]: pgmap v1181: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 52 KiB/s wr, 5 op/s
Jan 21 14:19:15 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 14:19:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 14:19:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b"]} : dispatch
Jan 21 14:19:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_424167b3-6c3d-4062-8da1-4d053af4cf7b"]}]': finished
Jan 21 14:19:15 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c707dfa3-0985-4b01-bd2d-86b20bf31443", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 14:19:16 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:19:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 105 KiB/s wr, 8 op/s
Jan 21 14:19:16 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "d0adc3df-76d3-4b70-bbe5-e57bff1140d1", "format": "json"}]: dispatch
Jan 21 14:19:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 21 14:19:17 compute-0 ceph-mon[75031]: pgmap v1182: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 105 KiB/s wr, 8 op/s
Jan 21 14:19:17 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "d0adc3df-76d3-4b70-bbe5-e57bff1140d1", "format": "json"}]: dispatch
Jan 21 14:19:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 21 14:19:17 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 21 14:19:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s wr, 4 op/s
Jan 21 14:19:18 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 14:19:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:19:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Jan 21 14:19:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 14:19:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0)
Jan 21 14:19:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Jan 21 14:19:18 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Jan 21 14:19:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:19:18 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 14:19:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:19:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575
Jan 21 14:19:18 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b/04464bce-b5c2-48d9-860a-5b8b6ce45575],prefix=session evict} (starting...)
Jan 21 14:19:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Jan 21 14:19:18 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:19:18 compute-0 ceph-mon[75031]: osdmap e154: 3 total, 3 up, 3 in
Jan 21 14:19:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Jan 21 14:19:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Jan 21 14:19:18 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Jan 21 14:19:19 compute-0 ceph-mon[75031]: pgmap v1184: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s wr, 4 op/s
Jan 21 14:19:19 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 14:19:19 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "auth_id": "bob", "format": "json"}]: dispatch
Jan 21 14:19:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 142 KiB/s wr, 11 op/s
Jan 21 14:19:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:19:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 21 14:19:21 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 21 14:19:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 21 14:19:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "d0adc3df-76d3-4b70-bbe5-e57bff1140d1_6730d26a-6aff-40af-a601-82d487d21c1b", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1_6730d26a-6aff-40af-a601-82d487d21c1b, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:19:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:19:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1_6730d26a-6aff-40af-a601-82d487d21c1b, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:21 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "d0adc3df-76d3-4b70-bbe5-e57bff1140d1", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:19:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:19:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d0adc3df-76d3-4b70-bbe5-e57bff1140d1, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:21 compute-0 ceph-mon[75031]: pgmap v1185: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 142 KiB/s wr, 11 op/s
Jan 21 14:19:21 compute-0 ceph-mon[75031]: osdmap e155: 3 total, 3 up, 3 in
Jan 21 14:19:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 142 KiB/s wr, 10 op/s
Jan 21 14:19:22 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "format": "json"}]: dispatch
Jan 21 14:19:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:19:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:19:22 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:19:22.837+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '424167b3-6c3d-4062-8da1-4d053af4cf7b' of type subvolume
Jan 21 14:19:22 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '424167b3-6c3d-4062-8da1-4d053af4cf7b' of type subvolume
Jan 21 14:19:22 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:19:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/424167b3-6c3d-4062-8da1-4d053af4cf7b'' moved to trashcan
Jan 21 14:19:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:19:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:424167b3-6c3d-4062-8da1-4d053af4cf7b, vol_name:cephfs) < ""
Jan 21 14:19:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:19:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3330434979' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:19:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:19:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3330434979' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:19:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "d0adc3df-76d3-4b70-bbe5-e57bff1140d1_6730d26a-6aff-40af-a601-82d487d21c1b", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:23 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "d0adc3df-76d3-4b70-bbe5-e57bff1140d1", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 89 KiB/s wr, 7 op/s
Jan 21 14:19:25 compute-0 ceph-mon[75031]: pgmap v1187: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 142 KiB/s wr, 10 op/s
Jan 21 14:19:25 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "format": "json"}]: dispatch
Jan 21 14:19:25 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "424167b3-6c3d-4062-8da1-4d053af4cf7b", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:25 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/3330434979' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:19:25 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/3330434979' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:19:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 473 B/s rd, 129 KiB/s wr, 9 op/s
Jan 21 14:19:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:19:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 21 14:19:26 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 21 14:19:26 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 21 14:19:26 compute-0 ceph-mon[75031]: pgmap v1188: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 89 KiB/s wr, 7 op/s
Jan 21 14:19:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "737b967c-2386-438d-aa9d-7e9a039e9aac", "format": "json"}]: dispatch
Jan 21 14:19:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:27 compute-0 ceph-mon[75031]: pgmap v1189: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 473 B/s rd, 129 KiB/s wr, 9 op/s
Jan 21 14:19:27 compute-0 ceph-mon[75031]: osdmap e156: 3 total, 3 up, 3 in
Jan 21 14:19:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 51 KiB/s wr, 3 op/s
Jan 21 14:19:28 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "737b967c-2386-438d-aa9d-7e9a039e9aac", "format": "json"}]: dispatch
Jan 21 14:19:30 compute-0 ceph-mon[75031]: pgmap v1191: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 51 KiB/s wr, 3 op/s
Jan 21 14:19:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 463 B/s rd, 72 KiB/s wr, 5 op/s
Jan 21 14:19:31 compute-0 ceph-mon[75031]: pgmap v1192: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 463 B/s rd, 72 KiB/s wr, 5 op/s
Jan 21 14:19:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:19:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 21 14:19:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 21 14:19:31 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 21 14:19:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 79 KiB/s wr, 5 op/s
Jan 21 14:19:32 compute-0 ceph-mon[75031]: osdmap e157: 3 total, 3 up, 3 in
Jan 21 14:19:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "737b967c-2386-438d-aa9d-7e9a039e9aac_c69e25ee-cc9d-4429-8a2a-8711a855d3dd", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac_c69e25ee-cc9d-4429-8a2a-8711a855d3dd, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:19:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:19:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac_c69e25ee-cc9d-4429-8a2a-8711a855d3dd, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "737b967c-2386-438d-aa9d-7e9a039e9aac", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:19:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:19:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:737b967c-2386-438d-aa9d-7e9a039e9aac, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:33 compute-0 ceph-mon[75031]: pgmap v1194: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 79 KiB/s wr, 5 op/s
Jan 21 14:19:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:19:33.910 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:19:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:19:33.911 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:19:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:19:33.911 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:19:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 28 KiB/s wr, 2 op/s
Jan 21 14:19:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "737b967c-2386-438d-aa9d-7e9a039e9aac_c69e25ee-cc9d-4429-8a2a-8711a855d3dd", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "737b967c-2386-438d-aa9d-7e9a039e9aac", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 214 B/s rd, 47 KiB/s wr, 4 op/s
Jan 21 14:19:36 compute-0 ceph-mon[75031]: pgmap v1195: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 28 KiB/s wr, 2 op/s
Jan 21 14:19:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:19:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 45 KiB/s wr, 3 op/s
Jan 21 14:19:39 compute-0 ceph-mon[75031]: pgmap v1196: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 214 B/s rd, 47 KiB/s wr, 4 op/s
Jan 21 14:19:39 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:19:39.456 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:19:39 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:19:39.458 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:19:39 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:19:39.459 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:19:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "e9dfc6f5-6817-4818-8b7a-6638ecfd5d54_50188602-9386-4740-9326-44acaedb4caa", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:39 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54_50188602-9386-4740-9326-44acaedb4caa, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:19:39
Jan 21 14:19:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:19:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:19:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'volumes']
Jan 21 14:19:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:19:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 25 KiB/s wr, 2 op/s
Jan 21 14:19:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:19:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:19:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54_50188602-9386-4740-9326-44acaedb4caa, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:40 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "e9dfc6f5-6817-4818-8b7a-6638ecfd5d54", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:19:41 compute-0 ceph-mon[75031]: pgmap v1197: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 45 KiB/s wr, 3 op/s
Jan 21 14:19:41 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "e9dfc6f5-6817-4818-8b7a-6638ecfd5d54_50188602-9386-4740-9326-44acaedb4caa", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp'
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta.tmp' to config b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b/.meta'
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e9dfc6f5-6817-4818-8b7a-6638ecfd5d54, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:19:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:19:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 21 14:19:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 188 B/s rd, 23 KiB/s wr, 2 op/s
Jan 21 14:19:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 21 14:19:42 compute-0 ceph-mon[75031]: pgmap v1198: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 25 KiB/s wr, 2 op/s
Jan 21 14:19:42 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "snap_name": "e9dfc6f5-6817-4818-8b7a-6638ecfd5d54", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:42 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 21 14:19:43 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "format": "json"}]: dispatch
Jan 21 14:19:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4fe02932-2d04-427d-b4f6-1c341396704b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:19:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4fe02932-2d04-427d-b4f6-1c341396704b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:19:43 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:19:43.136+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4fe02932-2d04-427d-b4f6-1c341396704b' of type subvolume
Jan 21 14:19:43 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4fe02932-2d04-427d-b4f6-1c341396704b' of type subvolume
Jan 21 14:19:43 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4fe02932-2d04-427d-b4f6-1c341396704b'' moved to trashcan
Jan 21 14:19:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:19:43 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4fe02932-2d04-427d-b4f6-1c341396704b, vol_name:cephfs) < ""
Jan 21 14:19:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 21 14:19:43 compute-0 ceph-mon[75031]: pgmap v1199: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 188 B/s rd, 23 KiB/s wr, 2 op/s
Jan 21 14:19:43 compute-0 ceph-mon[75031]: osdmap e158: 3 total, 3 up, 3 in
Jan 21 14:19:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 21 14:19:43 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 21 14:19:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.7 KiB/s wr, 1 op/s
Jan 21 14:19:44 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "format": "json"}]: dispatch
Jan 21 14:19:44 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4fe02932-2d04-427d-b4f6-1c341396704b", "force": true, "format": "json"}]: dispatch
Jan 21 14:19:44 compute-0 ceph-mon[75031]: osdmap e159: 3 total, 3 up, 3 in
Jan 21 14:19:45 compute-0 podman[252745]: 2026-01-21 14:19:45.363319963 +0000 UTC m=+0.082418399 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 21 14:19:45 compute-0 podman[252746]: 2026-01-21 14:19:45.364570983 +0000 UTC m=+0.076963612 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 14:19:45 compute-0 ceph-mon[75031]: pgmap v1202: 305 pgs: 305 active+clean; 69 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.7 KiB/s wr, 1 op/s
Jan 21 14:19:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 63 KiB/s wr, 5 op/s
Jan 21 14:19:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:19:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 21 14:19:47 compute-0 nova_compute[239261]: 2026-01-21 14:19:47.750 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:19:47 compute-0 ceph-mon[75031]: pgmap v1203: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 63 KiB/s wr, 5 op/s
Jan 21 14:19:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 21 14:19:48 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 21 14:19:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 79 KiB/s wr, 5 op/s
Jan 21 14:19:48 compute-0 nova_compute[239261]: 2026-01-21 14:19:48.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:19:48 compute-0 nova_compute[239261]: 2026-01-21 14:19:48.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:19:48 compute-0 nova_compute[239261]: 2026-01-21 14:19:48.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:19:48 compute-0 nova_compute[239261]: 2026-01-21 14:19:48.741 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:19:48 compute-0 nova_compute[239261]: 2026-01-21 14:19:48.742 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:19:48 compute-0 nova_compute[239261]: 2026-01-21 14:19:48.768 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:19:48 compute-0 nova_compute[239261]: 2026-01-21 14:19:48.769 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:19:48 compute-0 nova_compute[239261]: 2026-01-21 14:19:48.769 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
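The three lockutils entries above show nova's resource tracker serializing work on the "compute_resources" semaphore; the "inner ... lockutils.py" frames come from oslo.concurrency's synchronized() wrapper. A minimal sketch of that pattern, assuming decorator-based usage (whatever external/fair lock options nova actually passes are not visible in the log):

from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def clean_compute_node_cache():
    # Runs with the "compute_resources" semaphore held; the "Acquiring",
    # "acquired ... waited 0.000s" and "released ... held 0.000s" messages
    # above are emitted by the wrapper around this call.
    pass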
Jan 21 14:19:48 compute-0 nova_compute[239261]: 2026-01-21 14:19:48.769 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:19:48 compute-0 nova_compute[239261]: 2026-01-21 14:19:48.770 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:19:49 compute-0 ceph-mon[75031]: osdmap e160: 3 total, 3 up, 3 in
Jan 21 14:19:49 compute-0 ceph-mon[75031]: pgmap v1205: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 79 KiB/s wr, 5 op/s
Jan 21 14:19:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:19:49 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867554194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:19:49 compute-0 nova_compute[239261]: 2026-01-21 14:19:49.358 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
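The resource audit above gathers Ceph capacity by shelling out to the ceph CLI. A minimal sketch of the same probe: the command line is copied verbatim from the log, while the JSON keys ("stats", "total_bytes", "total_avail_bytes") match recent Ceph releases and should be treated as an assumption:

import json
import subprocess

# Same invocation nova logs above, via oslo_concurrency.processutils.
out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"]
)
stats = json.loads(out)["stats"]
gib = 1024 ** 3
# Against the cluster above this prints roughly "60 GiB free of 60 GiB".
print(f"{stats['total_avail_bytes'] / gib:.0f} GiB free of "
      f"{stats['total_bytes'] / gib:.0f} GiB")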
Jan 21 14:19:49 compute-0 nova_compute[239261]: 2026-01-21 14:19:49.518 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:19:49 compute-0 nova_compute[239261]: 2026-01-21 14:19:49.519 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:19:49 compute-0 nova_compute[239261]: 2026-01-21 14:19:49.520 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:19:49 compute-0 nova_compute[239261]: 2026-01-21 14:19:49.520 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:19:49 compute-0 nova_compute[239261]: 2026-01-21 14:19:49.582 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:19:49 compute-0 nova_compute[239261]: 2026-01-21 14:19:49.583 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:19:49 compute-0 nova_compute[239261]: 2026-01-21 14:19:49.596 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:19:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:19:50 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/32715997' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:19:50 compute-0 nova_compute[239261]: 2026-01-21 14:19:50.230 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:19:50 compute-0 nova_compute[239261]: 2026-01-21 14:19:50.235 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:19:50 compute-0 nova_compute[239261]: 2026-01-21 14:19:50.250 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
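The inventory dict above is what the tracker pushes to placement; schedulable capacity follows placement's standard rule, (total - reserved) * allocation_ratio. A worked check against the logged values:

# Values copied from the inventory entry above.
inv = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
}
for rc, v in inv.items():
    # capacity = (total - reserved) * allocation_ratio
    print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1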
Jan 21 14:19:50 compute-0 nova_compute[239261]: 2026-01-21 14:19:50.252 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:19:50 compute-0 nova_compute[239261]: 2026-01-21 14:19:50.252 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 529 B/s rd, 62 KiB/s wr, 5 op/s
Jan 21 14:19:50 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/867554194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:19:50 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/32715997' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662177847039849 of space, bias 1.0, pg target 0.19986533541119547 quantized to 32 (current 32)
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0004678936436856764 of space, bias 4.0, pg target 0.5614723724228117 quantized to 16 (current 16)
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:19:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:19:51 compute-0 ceph-mon[75031]: pgmap v1206: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 529 B/s rd, 62 KiB/s wr, 5 op/s
Jan 21 14:19:52 compute-0 nova_compute[239261]: 2026-01-21 14:19:52.234 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:19:52 compute-0 nova_compute[239261]: 2026-01-21 14:19:52.235 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:19:52 compute-0 nova_compute[239261]: 2026-01-21 14:19:52.235 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:19:52 compute-0 nova_compute[239261]: 2026-01-21 14:19:52.235 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:19:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:19:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 21 14:19:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 469 B/s rd, 55 KiB/s wr, 4 op/s
Jan 21 14:19:52 compute-0 nova_compute[239261]: 2026-01-21 14:19:52.719 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:19:52 compute-0 nova_compute[239261]: 2026-01-21 14:19:52.790 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:19:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 21 14:19:52 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 21 14:19:53 compute-0 nova_compute[239261]: 2026-01-21 14:19:53.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:19:53 compute-0 ceph-mon[75031]: pgmap v1207: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 469 B/s rd, 55 KiB/s wr, 4 op/s
Jan 21 14:19:53 compute-0 ceph-mon[75031]: osdmap e161: 3 total, 3 up, 3 in
Jan 21 14:19:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 639 B/s wr, 1 op/s
Jan 21 14:19:54 compute-0 nova_compute[239261]: 2026-01-21 14:19:54.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:19:55 compute-0 ceph-mon[75031]: pgmap v1209: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 639 B/s wr, 1 op/s
Jan 21 14:19:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 253 B/s rd, 19 KiB/s wr, 2 op/s
Jan 21 14:19:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:19:57 compute-0 ceph-mon[75031]: pgmap v1210: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 253 B/s rd, 19 KiB/s wr, 2 op/s
Jan 21 14:19:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 15 KiB/s wr, 1 op/s
Jan 21 14:19:59 compute-0 ceph-mon[75031]: pgmap v1211: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 15 KiB/s wr, 1 op/s
Jan 21 14:20:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Jan 21 14:20:01 compute-0 anacron[216930]: Job `cron.daily' started
Jan 21 14:20:01 compute-0 anacron[216930]: Job `cron.daily' terminated
Jan 21 14:20:01 compute-0 ceph-mon[75031]: pgmap v1212: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Jan 21 14:20:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Jan 21 14:20:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:03 compute-0 ceph-mon[75031]: pgmap v1213: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Jan 21 14:20:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 14:20:06 compute-0 ceph-mon[75031]: pgmap v1214: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 14:20:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Jan 21 14:20:07 compute-0 ceph-mon[75031]: pgmap v1215: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Jan 21 14:20:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 14:20:09 compute-0 ceph-mon[75031]: pgmap v1216: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 14:20:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 14:20:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:20:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:20:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:20:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:20:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:20:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:20:11 compute-0 ceph-mon[75031]: pgmap v1217: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 14:20:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 14:20:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:13 compute-0 ceph-mon[75031]: pgmap v1218: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 14:20:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 14:20:14 compute-0 sudo[252836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:20:14 compute-0 sudo[252836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:20:14 compute-0 sudo[252836]: pam_unix(sudo:session): session closed for user root
Jan 21 14:20:14 compute-0 sudo[252861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:20:14 compute-0 sudo[252861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:20:15 compute-0 sudo[252861]: pam_unix(sudo:session): session closed for user root
Jan 21 14:20:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:20:15 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:20:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:20:15 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:20:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:20:15 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:20:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:20:15 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:20:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:20:15 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:20:15 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:20:15 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:20:15 compute-0 sudo[252916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:20:15 compute-0 sudo[252916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:20:15 compute-0 sudo[252916]: pam_unix(sudo:session): session closed for user root
Jan 21 14:20:15 compute-0 sudo[252941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:20:15 compute-0 sudo[252941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:20:15 compute-0 podman[252966]: 2026-01-21 14:20:15.490318761 +0000 UTC m=+0.054926056 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:20:15 compute-0 podman[252965]: 2026-01-21 14:20:15.537497045 +0000 UTC m=+0.104802643 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 21 14:20:15 compute-0 podman[253023]: 2026-01-21 14:20:15.723799494 +0000 UTC m=+0.061614282 container create 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:20:15 compute-0 systemd[1]: Started libpod-conmon-411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62.scope.
Jan 21 14:20:15 compute-0 podman[253023]: 2026-01-21 14:20:15.694016417 +0000 UTC m=+0.031831235 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:20:15 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:20:15 compute-0 podman[253023]: 2026-01-21 14:20:15.826001895 +0000 UTC m=+0.163816723 container init 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:20:15 compute-0 podman[253023]: 2026-01-21 14:20:15.834366911 +0000 UTC m=+0.172181689 container start 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:20:15 compute-0 podman[253023]: 2026-01-21 14:20:15.838222391 +0000 UTC m=+0.176037179 container attach 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:20:15 compute-0 gracious_ganguly[253040]: 167 167
Jan 21 14:20:15 compute-0 systemd[1]: libpod-411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62.scope: Deactivated successfully.
Jan 21 14:20:15 compute-0 podman[253023]: 2026-01-21 14:20:15.840848242 +0000 UTC m=+0.178663020 container died 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 14:20:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ecaa4801fa46375682519307b2084c8ab1485fa2d55afab8760318869808d6-merged.mount: Deactivated successfully.
Jan 21 14:20:15 compute-0 podman[253023]: 2026-01-21 14:20:15.899355431 +0000 UTC m=+0.237170199 container remove 411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 21 14:20:15 compute-0 systemd[1]: libpod-conmon-411cced00061d57f5df7cd71868a3b338ff0d99e5a825f3a36838bf63de78d62.scope: Deactivated successfully.
Jan 21 14:20:15 compute-0 ceph-mon[75031]: pgmap v1219: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 14:20:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:20:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:20:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:20:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:20:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:20:15 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:20:16 compute-0 podman[253065]: 2026-01-21 14:20:16.110775048 +0000 UTC m=+0.048668350 container create 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:20:16 compute-0 systemd[1]: Started libpod-conmon-350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2.scope.
Jan 21 14:20:16 compute-0 podman[253065]: 2026-01-21 14:20:16.09293151 +0000 UTC m=+0.030824832 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:20:16 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:16 compute-0 podman[253065]: 2026-01-21 14:20:16.207091451 +0000 UTC m=+0.144984783 container init 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:20:16 compute-0 podman[253065]: 2026-01-21 14:20:16.216988403 +0000 UTC m=+0.154881705 container start 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:20:16 compute-0 podman[253065]: 2026-01-21 14:20:16.220465374 +0000 UTC m=+0.158358676 container attach 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:20:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 14:20:16 compute-0 jolly_swanson[253081]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:20:16 compute-0 jolly_swanson[253081]: --> All data devices are unavailable
Jan 21 14:20:16 compute-0 systemd[1]: libpod-350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2.scope: Deactivated successfully.
Jan 21 14:20:16 compute-0 podman[253065]: 2026-01-21 14:20:16.70073931 +0000 UTC m=+0.638632602 container died 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 14:20:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3896b4454d81c974cd605bf7942230bc3cb8a496a5074f3215765c7904fe85db-merged.mount: Deactivated successfully.
Jan 21 14:20:16 compute-0 podman[253065]: 2026-01-21 14:20:16.743398289 +0000 UTC m=+0.681291591 container remove 350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_swanson, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:20:16 compute-0 systemd[1]: libpod-conmon-350f74f2c70fa21b52b4db39847effa0fc6603f727db39d1e19596352b6644e2.scope: Deactivated successfully.
Jan 21 14:20:16 compute-0 sudo[252941]: pam_unix(sudo:session): session closed for user root
Jan 21 14:20:16 compute-0 sudo[253114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:20:16 compute-0 sudo[253114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:20:16 compute-0 sudo[253114]: pam_unix(sudo:session): session closed for user root
Jan 21 14:20:16 compute-0 sudo[253139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:20:16 compute-0 sudo[253139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:20:17 compute-0 podman[253175]: 2026-01-21 14:20:17.186821644 +0000 UTC m=+0.026844750 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:20:17 compute-0 podman[253175]: 2026-01-21 14:20:17.538160823 +0000 UTC m=+0.378183839 container create b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:20:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:18 compute-0 systemd[1]: Started libpod-conmon-b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550.scope.
Jan 21 14:20:18 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:20:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:18 compute-0 ceph-mon[75031]: pgmap v1220: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Jan 21 14:20:18 compute-0 podman[253175]: 2026-01-21 14:20:18.648020511 +0000 UTC m=+1.488043617 container init b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:20:18 compute-0 podman[253175]: 2026-01-21 14:20:18.659970541 +0000 UTC m=+1.499993597 container start b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 14:20:18 compute-0 blissful_chandrasekhar[253192]: 167 167
Jan 21 14:20:18 compute-0 podman[253175]: 2026-01-21 14:20:18.666415551 +0000 UTC m=+1.506438607 container attach b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:20:18 compute-0 systemd[1]: libpod-b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550.scope: Deactivated successfully.
Jan 21 14:20:18 compute-0 podman[253197]: 2026-01-21 14:20:18.724335537 +0000 UTC m=+0.033789702 container died b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-40c5627c9912160aa68553b5565a684b9217f2136ea11be64b05258843771390-merged.mount: Deactivated successfully.
Jan 21 14:20:18 compute-0 podman[253197]: 2026-01-21 14:20:18.903973029 +0000 UTC m=+0.213427154 container remove b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 14:20:18 compute-0 systemd[1]: libpod-conmon-b448c8d861fcf5a779b2db80ca4e21604fdeb96e5ebdb5d4316fa2e381a01550.scope: Deactivated successfully.
Jan 21 14:20:19 compute-0 podman[253219]: 2026-01-21 14:20:19.102470524 +0000 UTC m=+0.043752766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:20:19 compute-0 podman[253219]: 2026-01-21 14:20:19.214017283 +0000 UTC m=+0.155299435 container create 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:20:19 compute-0 systemd[1]: Started libpod-conmon-3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b.scope.
Jan 21 14:20:19 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba8a4720be0ec1347715e2c64186796e2051fe3a6b34dd7101ad6dd4bd53c2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba8a4720be0ec1347715e2c64186796e2051fe3a6b34dd7101ad6dd4bd53c2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba8a4720be0ec1347715e2c64186796e2051fe3a6b34dd7101ad6dd4bd53c2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba8a4720be0ec1347715e2c64186796e2051fe3a6b34dd7101ad6dd4bd53c2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:19 compute-0 podman[253219]: 2026-01-21 14:20:19.375668976 +0000 UTC m=+0.316951138 container init 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:20:19 compute-0 podman[253219]: 2026-01-21 14:20:19.384245466 +0000 UTC m=+0.325527608 container start 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 21 14:20:19 compute-0 podman[253219]: 2026-01-21 14:20:19.436290793 +0000 UTC m=+0.377572965 container attach 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]: {
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:     "0": [
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:         {
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "devices": [
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "/dev/loop3"
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             ],
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_name": "ceph_lv0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_size": "21470642176",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "name": "ceph_lv0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "tags": {
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.cluster_name": "ceph",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.crush_device_class": "",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.encrypted": "0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.objectstore": "bluestore",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.osd_id": "0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.type": "block",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.vdo": "0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.with_tpm": "0"
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             },
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "type": "block",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "vg_name": "ceph_vg0"
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:         }
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:     ],
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:     "1": [
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:         {
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "devices": [
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "/dev/loop4"
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             ],
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_name": "ceph_lv1",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_size": "21470642176",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "name": "ceph_lv1",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "tags": {
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.cluster_name": "ceph",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.crush_device_class": "",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.encrypted": "0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.objectstore": "bluestore",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.osd_id": "1",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.type": "block",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.vdo": "0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.with_tpm": "0"
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             },
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "type": "block",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "vg_name": "ceph_vg1"
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:         }
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:     ],
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:     "2": [
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:         {
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "devices": [
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "/dev/loop5"
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             ],
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_name": "ceph_lv2",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_size": "21470642176",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "name": "ceph_lv2",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "tags": {
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.cluster_name": "ceph",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.crush_device_class": "",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.encrypted": "0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.objectstore": "bluestore",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.osd_id": "2",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.type": "block",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.vdo": "0",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:                 "ceph.with_tpm": "0"
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             },
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "type": "block",
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:             "vg_name": "ceph_vg2"
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:         }
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]:     ]
Jan 21 14:20:19 compute-0 xenodochial_haibt[253236]: }
Jan 21 14:20:19 compute-0 ceph-mon[75031]: pgmap v1221: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:19 compute-0 systemd[1]: libpod-3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b.scope: Deactivated successfully.
Jan 21 14:20:19 compute-0 podman[253219]: 2026-01-21 14:20:19.714858811 +0000 UTC m=+0.656140973 container died 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 14:20:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-aba8a4720be0ec1347715e2c64186796e2051fe3a6b34dd7101ad6dd4bd53c2c-merged.mount: Deactivated successfully.
Jan 21 14:20:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:20 compute-0 podman[253219]: 2026-01-21 14:20:20.570301616 +0000 UTC m=+1.511583788 container remove 3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_haibt, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:20:20 compute-0 sudo[253139]: pam_unix(sudo:session): session closed for user root
Jan 21 14:20:20 compute-0 systemd[1]: libpod-conmon-3b770459ce90580d40132c64218f208c2649f5216100695c6b729a846711944b.scope: Deactivated successfully.
Jan 21 14:20:20 compute-0 sudo[253258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:20:20 compute-0 sudo[253258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:20:20 compute-0 sudo[253258]: pam_unix(sudo:session): session closed for user root
Jan 21 14:20:20 compute-0 sudo[253283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:20:20 compute-0 sudo[253283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:20:21 compute-0 podman[253319]: 2026-01-21 14:20:21.024780929 +0000 UTC m=+0.029931892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:20:21 compute-0 podman[253319]: 2026-01-21 14:20:21.355837135 +0000 UTC m=+0.360988098 container create c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 21 14:20:21 compute-0 systemd[1]: Started libpod-conmon-c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c.scope.
Jan 21 14:20:21 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:20:21 compute-0 podman[253319]: 2026-01-21 14:20:21.547656532 +0000 UTC m=+0.552807495 container init c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 21 14:20:21 compute-0 podman[253319]: 2026-01-21 14:20:21.553596721 +0000 UTC m=+0.558747704 container start c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 21 14:20:21 compute-0 affectionate_kalam[253335]: 167 167
Jan 21 14:20:21 compute-0 systemd[1]: libpod-c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c.scope: Deactivated successfully.
Jan 21 14:20:21 compute-0 podman[253319]: 2026-01-21 14:20:21.559212982 +0000 UTC m=+0.564363925 container attach c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:20:21 compute-0 podman[253319]: 2026-01-21 14:20:21.559881619 +0000 UTC m=+0.565032562 container died c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 21 14:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-208a0d5b1996ec7f575c705e7d28ee9cc519ffec635058a6e1c119c4f8531fb4-merged.mount: Deactivated successfully.
Jan 21 14:20:22 compute-0 podman[253319]: 2026-01-21 14:20:22.03439006 +0000 UTC m=+1.039540993 container remove c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:20:22 compute-0 ceph-mon[75031]: pgmap v1222: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:22 compute-0 systemd[1]: libpod-conmon-c4ae1ce8005f8e71762b3e22cec7d74b96ba58e808549571cba1205a4c6c480c.scope: Deactivated successfully.
Jan 21 14:20:22 compute-0 podman[253359]: 2026-01-21 14:20:22.177310244 +0000 UTC m=+0.020206934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:20:22 compute-0 podman[253359]: 2026-01-21 14:20:22.389372986 +0000 UTC m=+0.232269686 container create 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 14:20:22 compute-0 systemd[1]: Started libpod-conmon-415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9.scope.
Jan 21 14:20:22 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fabc52710222d1ba64dc0223f5408c9b880deea210c5cd0a6f1d263020d7e985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fabc52710222d1ba64dc0223f5408c9b880deea210c5cd0a6f1d263020d7e985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fabc52710222d1ba64dc0223f5408c9b880deea210c5cd0a6f1d263020d7e985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fabc52710222d1ba64dc0223f5408c9b880deea210c5cd0a6f1d263020d7e985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:20:22 compute-0 podman[253359]: 2026-01-21 14:20:22.487993433 +0000 UTC m=+0.330890123 container init 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 14:20:22 compute-0 podman[253359]: 2026-01-21 14:20:22.497316351 +0000 UTC m=+0.340213021 container start 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 14:20:22 compute-0 podman[253359]: 2026-01-21 14:20:22.501184341 +0000 UTC m=+0.344081041 container attach 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:20:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:20:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2054083079' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:20:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:20:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2054083079' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:20:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2054083079' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:20:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2054083079' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:20:23 compute-0 lvm[253454]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:20:23 compute-0 lvm[253454]: VG ceph_vg1 finished
Jan 21 14:20:23 compute-0 lvm[253453]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:20:23 compute-0 lvm[253453]: VG ceph_vg0 finished
Jan 21 14:20:23 compute-0 lvm[253456]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:20:23 compute-0 lvm[253456]: VG ceph_vg2 finished
Jan 21 14:20:23 compute-0 suspicious_liskov[253375]: {}
Jan 21 14:20:23 compute-0 systemd[1]: libpod-415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9.scope: Deactivated successfully.
Jan 21 14:20:23 compute-0 podman[253359]: 2026-01-21 14:20:23.342630138 +0000 UTC m=+1.185526808 container died 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 14:20:23 compute-0 systemd[1]: libpod-415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9.scope: Consumed 1.338s CPU time.
Jan 21 14:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-fabc52710222d1ba64dc0223f5408c9b880deea210c5cd0a6f1d263020d7e985-merged.mount: Deactivated successfully.
Jan 21 14:20:23 compute-0 podman[253359]: 2026-01-21 14:20:23.39953049 +0000 UTC m=+1.242427200 container remove 415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_liskov, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:20:23 compute-0 systemd[1]: libpod-conmon-415f78b36df33ef36f0cadbeb78fc859670f43593c94e5c800668938d11255f9.scope: Deactivated successfully.
Jan 21 14:20:23 compute-0 sudo[253283]: pam_unix(sudo:session): session closed for user root
Jan 21 14:20:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:20:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:20:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:20:23 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:20:23 compute-0 sudo[253470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:20:23 compute-0 sudo[253470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:20:23 compute-0 sudo[253470]: pam_unix(sudo:session): session closed for user root
Jan 21 14:20:24 compute-0 ceph-mon[75031]: pgmap v1223: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:24 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:20:24 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:20:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:25 compute-0 ceph-mon[75031]: pgmap v1224: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:27 compute-0 ceph-mon[75031]: pgmap v1225: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:29 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:20:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 14:20:29 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/378a9e2b-830b-4331-9f8d-cddced43a09c/e838926d-ddbc-4be3-a09d-636b8eb3404d'.
Jan 21 14:20:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/378a9e2b-830b-4331-9f8d-cddced43a09c/.meta.tmp'
Jan 21 14:20:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/378a9e2b-830b-4331-9f8d-cddced43a09c/.meta.tmp' to config b'/volumes/_nogroup/378a9e2b-830b-4331-9f8d-cddced43a09c/.meta'
Jan 21 14:20:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 14:20:29 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "format": "json"}]: dispatch
Jan 21 14:20:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 14:20:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 14:20:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:20:29 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:20:29 compute-0 ceph-mon[75031]: pgmap v1226: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:20:29 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:20:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s wr, 0 op/s
Jan 21 14:20:30 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:20:30 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "format": "json"}]: dispatch
Jan 21 14:20:31 compute-0 ceph-mon[75031]: pgmap v1227: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s wr, 0 op/s
Jan 21 14:20:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s wr, 0 op/s
Jan 21 14:20:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:33 compute-0 ceph-mon[75031]: pgmap v1228: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s wr, 0 op/s
Jan 21 14:20:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:20:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 14:20:33 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/05a1a08c-ad8e-48c0-9200-239aff8cbad0/a864419b-98a6-4b69-8083-22802883d427'.
Jan 21 14:20:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/05a1a08c-ad8e-48c0-9200-239aff8cbad0/.meta.tmp'
Jan 21 14:20:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/05a1a08c-ad8e-48c0-9200-239aff8cbad0/.meta.tmp' to config b'/volumes/_nogroup/05a1a08c-ad8e-48c0-9200-239aff8cbad0/.meta'
Jan 21 14:20:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 14:20:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "format": "json"}]: dispatch
Jan 21 14:20:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 14:20:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 14:20:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:20:33 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:20:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:20:33.911 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:20:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:20:33.913 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:20:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:20:33.913 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:20:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Jan 21 14:20:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:20:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "format": "json"}]: dispatch
Jan 21 14:20:34 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:20:35 compute-0 ceph-mon[75031]: pgmap v1229: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Jan 21 14:20:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Jan 21 14:20:37 compute-0 ceph-mon[75031]: pgmap v1230: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Jan 21 14:20:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:20:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 14:20:37 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/4f159ef8-5ae2-4a2e-8475-13a3dd691b3d/1fa83db7-b63e-4998-96ec-6ffd93b788e9'.
Jan 21 14:20:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4f159ef8-5ae2-4a2e-8475-13a3dd691b3d/.meta.tmp'
Jan 21 14:20:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4f159ef8-5ae2-4a2e-8475-13a3dd691b3d/.meta.tmp' to config b'/volumes/_nogroup/4f159ef8-5ae2-4a2e-8475-13a3dd691b3d/.meta'
Jan 21 14:20:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 14:20:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "format": "json"}]: dispatch
Jan 21 14:20:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 14:20:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 14:20:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:20:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:20:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Jan 21 14:20:38 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:20:38 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "format": "json"}]: dispatch
Jan 21 14:20:38 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:20:39 compute-0 ceph-mon[75031]: pgmap v1231: 305 pgs: 305 active+clean; 70 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Jan 21 14:20:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:20:39
Jan 21 14:20:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:20:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:20:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'volumes', 'images', 'default.rgw.log']
Jan 21 14:20:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:20:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s wr, 3 op/s
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:20:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:20:42 compute-0 ceph-mon[75031]: pgmap v1232: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s wr, 3 op/s
Jan 21 14:20:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:20:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
Jan 21 14:20:42 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c2b5535e-c671-412e-8a2f-000a21e98354/a31a2175-050a-460b-86dc-4703b9dd32ff'.
Jan 21 14:20:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c2b5535e-c671-412e-8a2f-000a21e98354/.meta.tmp'
Jan 21 14:20:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c2b5535e-c671-412e-8a2f-000a21e98354/.meta.tmp' to config b'/volumes/_nogroup/c2b5535e-c671-412e-8a2f-000a21e98354/.meta'
Jan 21 14:20:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
Jan 21 14:20:42 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "format": "json"}]: dispatch
Jan 21 14:20:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
Jan 21 14:20:42 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
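The 14:20:42 audit entries above show the recurring two-step share-creation pattern issued by client.openstack (presumably the Manila CephFS driver): a 1 GiB, namespace-isolated "fs subvolume create" followed by "fs subvolume getpath". A rough reproduction with the ceph CLI might look like the following sketch; the subvolume name is a hypothetical stand-in for the UUIDs the client passes in:

    import subprocess

    def ceph(*args):
        base = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
        return subprocess.check_output(base + list(args), text=True)

    sub = "example-share-id"  # hypothetical; the log uses share UUIDs
    ceph("fs", "subvolume", "create", "cephfs", sub,
         "--size", str(1 * 1024**3), "--namespace-isolated", "--mode", "0755")
    # getpath resolves to /volumes/_nogroup/<sub>/<uuid>, matching the
    # b'/volumes/_nogroup/...' paths the mgr logs when writing .meta files.
    path = ceph("fs", "subvolume", "getpath", "cephfs", sub).strip()
    print(path)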
Jan 21 14:20:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:20:42 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:20:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 2 op/s
Jan 21 14:20:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:43 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:20:44 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:20:44 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "format": "json"}]: dispatch
Jan 21 14:20:44 compute-0 ceph-mon[75031]: pgmap v1233: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 2 op/s
Jan 21 14:20:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 2 op/s
Jan 21 14:20:45 compute-0 ceph-mon[75031]: pgmap v1234: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 2 op/s
Jan 21 14:20:46 compute-0 podman[253496]: 2026-01-21 14:20:46.336421991 +0000 UTC m=+0.055981551 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 14:20:46 compute-0 podman[253495]: 2026-01-21 14:20:46.36972989 +0000 UTC m=+0.092841973 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
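The health_status=healthy events above are emitted by podman's periodic healthcheck, which runs the configured '/openstack/healthcheck' test inside each container. As a sketch, the same check can be triggered by hand:

    import subprocess

    # Run the container's configured healthcheck once; exit status 0 means
    # healthy. "ovn_controller" is the container_name from the event above.
    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"],
                   check=True)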
Jan 21 14:20:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s wr, 3 op/s
Jan 21 14:20:46 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "format": "json"}]: dispatch
Jan 21 14:20:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c2b5535e-c671-412e-8a2f-000a21e98354, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:20:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c2b5535e-c671-412e-8a2f-000a21e98354, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:20:46 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:20:46.870+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2b5535e-c671-412e-8a2f-000a21e98354' of type subvolume
Jan 21 14:20:46 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c2b5535e-c671-412e-8a2f-000a21e98354' of type subvolume
Jan 21 14:20:46 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "force": true, "format": "json"}]: dispatch
Jan 21 14:20:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
Jan 21 14:20:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c2b5535e-c671-412e-8a2f-000a21e98354'' moved to trashcan
Jan 21 14:20:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:20:46 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c2b5535e-c671-412e-8a2f-000a21e98354, vol_name:cephfs) < ""
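The "(95) Operation not supported" reply above is expected behavior rather than a failure: "fs clone status" is only valid for subvolumes of type clone, and this one was created directly, so the caller proceeds to a forced "fs subvolume rm" (which moves the path to the trashcan and queues an async purge job). A sketch of that fallback, under the same assumptions as the previous snippet:

    import subprocess

    def ceph(*args):
        base = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
        return subprocess.run(base + list(args), capture_output=True, text=True)

    sub = "example-share-id"  # hypothetical stand-in for the UUIDs above
    res = ceph("fs", "clone", "status", "cephfs", sub)
    if res.returncode != 0 and "not allowed on subvolume" in res.stderr:
        # Not a clone: delete the subvolume directly, as the audit log shows.
        ceph("fs", "subvolume", "rm", "cephfs", sub, "--force")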
Jan 21 14:20:47 compute-0 ceph-mon[75031]: pgmap v1235: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s wr, 3 op/s
Jan 21 14:20:47 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "format": "json"}]: dispatch
Jan 21 14:20:47 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c2b5535e-c671-412e-8a2f-000a21e98354", "force": true, "format": "json"}]: dispatch
Jan 21 14:20:47 compute-0 nova_compute[239261]: 2026-01-21 14:20:47.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:20:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s wr, 2 op/s
Jan 21 14:20:49 compute-0 ceph-mon[75031]: pgmap v1236: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s wr, 2 op/s
Jan 21 14:20:49 compute-0 nova_compute[239261]: 2026-01-21 14:20:49.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:20:49 compute-0 nova_compute[239261]: 2026-01-21 14:20:49.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:20:49 compute-0 nova_compute[239261]: 2026-01-21 14:20:49.724 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:20:49 compute-0 nova_compute[239261]: 2026-01-21 14:20:49.743 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "format": "json"}]: dispatch
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:20:50 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:20:50.279+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4f159ef8-5ae2-4a2e-8475-13a3dd691b3d' of type subvolume
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4f159ef8-5ae2-4a2e-8475-13a3dd691b3d' of type subvolume
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "force": true, "format": "json"}]: dispatch
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4f159ef8-5ae2-4a2e-8475-13a3dd691b3d'' moved to trashcan
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4f159ef8-5ae2-4a2e-8475-13a3dd691b3d, vol_name:cephfs) < ""
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 72 KiB/s wr, 4 op/s
Jan 21 14:20:50 compute-0 nova_compute[239261]: 2026-01-21 14:20:50.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:20:50 compute-0 nova_compute[239261]: 2026-01-21 14:20:50.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:20:50 compute-0 nova_compute[239261]: 2026-01-21 14:20:50.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662144933880528 of space, bias 1.0, pg target 0.19986434801641584 quantized to 32 (current 32)
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0004876885647310164 of space, bias 4.0, pg target 0.5852262776772197 quantized to 16 (current 16)
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:20:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
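The pg_autoscaler figures above fit a simple relation. Assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs this 60 GiB cluster appears to have, pg target = used ratio x bias x 300, which reproduces the logged values to the digit:

    # Back-of-the-envelope check (an assumption, not taken from Ceph source):
    # pg_target = used_ratio * bias * (mon_target_pg_per_osd * num_osds).
    rows = [
        (".mgr",               7.185749983720779e-06, 1.0),  # -> 0.00215...
        ("images",             0.0006662144933880528, 1.0),  # -> 0.19986...
        ("cephfs.cephfs.meta", 0.0004876885647310164, 4.0),  # -> 0.58522...
    ]
    pg_budget = 100 * 3  # target PGs per OSD * assumed OSD count
    for pool, used_ratio, bias in rows:
        print(pool, used_ratio * bias * pg_budget)
    # Each raw target is then quantized (here to 1, 32 and 16) against the
    # pool's current pg_num, so no pools are actually resized.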
Jan 21 14:20:50 compute-0 nova_compute[239261]: 2026-01-21 14:20:50.924 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:20:50 compute-0 nova_compute[239261]: 2026-01-21 14:20:50.925 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:20:50 compute-0 nova_compute[239261]: 2026-01-21 14:20:50.925 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:20:50 compute-0 nova_compute[239261]: 2026-01-21 14:20:50.925 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:20:50 compute-0 nova_compute[239261]: 2026-01-21 14:20:50.925 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:20:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:20:51 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530373979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:20:51 compute-0 nova_compute[239261]: 2026-01-21 14:20:51.464 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
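Nova's resource tracker sizes its Ceph-backed storage by shelling out to "ceph df --format=json", as the Running/returned CMD pair above shows. A trimmed sketch of the same call and the cluster-wide fields it returns, mirroring the 60 GiB avail / ~296 MiB used figures in the pgmap lines:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]  # cluster-wide totals
    gib = 1024 ** 3
    print("total %.1f GiB, avail %.1f GiB, used %.0f MiB" % (
        stats["total_bytes"] / gib,
        stats["total_avail_bytes"] / gib,
        stats["total_used_bytes"] / 1024 ** 2))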
Jan 21 14:20:51 compute-0 nova_compute[239261]: 2026-01-21 14:20:51.621 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:20:51 compute-0 nova_compute[239261]: 2026-01-21 14:20:51.622 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5016MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:20:51 compute-0 nova_compute[239261]: 2026-01-21 14:20:51.622 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:20:51 compute-0 nova_compute[239261]: 2026-01-21 14:20:51.623 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:20:51 compute-0 nova_compute[239261]: 2026-01-21 14:20:51.691 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:20:51 compute-0 nova_compute[239261]: 2026-01-21 14:20:51.691 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:20:51 compute-0 nova_compute[239261]: 2026-01-21 14:20:51.708 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:20:52 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "format": "json"}]: dispatch
Jan 21 14:20:52 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4f159ef8-5ae2-4a2e-8475-13a3dd691b3d", "force": true, "format": "json"}]: dispatch
Jan 21 14:20:52 compute-0 ceph-mon[75031]: pgmap v1237: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 72 KiB/s wr, 4 op/s
Jan 21 14:20:52 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/530373979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:20:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 14:20:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:20:52 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817285755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:20:52 compute-0 nova_compute[239261]: 2026-01-21 14:20:52.587 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.879s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:20:52 compute-0 nova_compute[239261]: 2026-01-21 14:20:52.596 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:20:52 compute-0 nova_compute[239261]: 2026-01-21 14:20:52.615 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
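The inventory record above is what placement schedules against: capacity per resource class is (total - reserved) * allocation_ratio. A quick check of the headroom it implies for this node:

    # Schedulable capacity implied by the reported inventory.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1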
Jan 21 14:20:52 compute-0 nova_compute[239261]: 2026-01-21 14:20:52.616 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:20:52 compute-0 nova_compute[239261]: 2026-01-21 14:20:52.617 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:20:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:53 compute-0 ceph-mon[75031]: pgmap v1238: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 14:20:53 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1817285755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:20:53 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "format": "json"}]: dispatch
Jan 21 14:20:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:20:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:20:53 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:20:53.659+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05a1a08c-ad8e-48c0-9200-239aff8cbad0' of type subvolume
Jan 21 14:20:53 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05a1a08c-ad8e-48c0-9200-239aff8cbad0' of type subvolume
Jan 21 14:20:53 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "force": true, "format": "json"}]: dispatch
Jan 21 14:20:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 14:20:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/05a1a08c-ad8e-48c0-9200-239aff8cbad0'' moved to trashcan
Jan 21 14:20:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:20:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05a1a08c-ad8e-48c0-9200-239aff8cbad0, vol_name:cephfs) < ""
Jan 21 14:20:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 14:20:54 compute-0 nova_compute[239261]: 2026-01-21 14:20:54.616 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:20:54 compute-0 nova_compute[239261]: 2026-01-21 14:20:54.617 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:20:54 compute-0 nova_compute[239261]: 2026-01-21 14:20:54.617 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:20:54 compute-0 nova_compute[239261]: 2026-01-21 14:20:54.617 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:20:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "format": "json"}]: dispatch
Jan 21 14:20:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "05a1a08c-ad8e-48c0-9200-239aff8cbad0", "force": true, "format": "json"}]: dispatch
Jan 21 14:20:54 compute-0 nova_compute[239261]: 2026-01-21 14:20:54.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:20:55 compute-0 ceph-mon[75031]: pgmap v1239: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 14:20:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 71 KiB/s wr, 5 op/s
Jan 21 14:20:57 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "format": "json"}]: dispatch
Jan 21 14:20:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:378a9e2b-830b-4331-9f8d-cddced43a09c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:20:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:378a9e2b-830b-4331-9f8d-cddced43a09c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:20:57 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:20:57.095+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '378a9e2b-830b-4331-9f8d-cddced43a09c' of type subvolume
Jan 21 14:20:57 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '378a9e2b-830b-4331-9f8d-cddced43a09c' of type subvolume
Jan 21 14:20:57 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "force": true, "format": "json"}]: dispatch
Jan 21 14:20:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 14:20:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/378a9e2b-830b-4331-9f8d-cddced43a09c'' moved to trashcan
Jan 21 14:20:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:20:57 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:378a9e2b-830b-4331-9f8d-cddced43a09c, vol_name:cephfs) < ""
Jan 21 14:20:57 compute-0 ceph-mon[75031]: pgmap v1240: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 71 KiB/s wr, 5 op/s
Jan 21 14:20:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:20:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Jan 21 14:20:59 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "format": "json"}]: dispatch
Jan 21 14:20:59 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "378a9e2b-830b-4331-9f8d-cddced43a09c", "force": true, "format": "json"}]: dispatch
Jan 21 14:21:00 compute-0 ceph-mon[75031]: pgmap v1241: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Jan 21 14:21:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 80 KiB/s wr, 6 op/s
Jan 21 14:21:01 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:21:01.933 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:21:01 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:21:01.935 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:21:02 compute-0 ceph-mon[75031]: pgmap v1242: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 80 KiB/s wr, 6 op/s
Jan 21 14:21:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 14:21:02 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:03 compute-0 ceph-mon[75031]: pgmap v1243: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 14:21:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 14:21:04 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:21:04.940 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:21:05 compute-0 ceph-mon[75031]: pgmap v1244: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 14:21:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 14:21:07 compute-0 ceph-mon[75031]: pgmap v1245: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 14:21:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 26 KiB/s wr, 3 op/s
Jan 21 14:21:09 compute-0 ceph-mon[75031]: pgmap v1246: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 26 KiB/s wr, 3 op/s
Jan 21 14:21:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 38 KiB/s wr, 3 op/s
Jan 21 14:21:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:21:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:21:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:21:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:21:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:21:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:21:11 compute-0 ceph-mon[75031]: pgmap v1247: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 38 KiB/s wr, 3 op/s
Jan 21 14:21:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 14:21:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:12 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:21:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:21:12 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/8cb7ae86-bfd3-4a18-836f-c9c7d266cc44'.
Jan 21 14:21:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp'
Jan 21 14:21:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp' to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta'
Jan 21 14:21:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:21:12 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "format": "json"}]: dispatch
Jan 21 14:21:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:21:12 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:21:12 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:21:12 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:21:14 compute-0 ceph-mon[75031]: pgmap v1248: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 14:21:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:21:14 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "format": "json"}]: dispatch
Jan 21 14:21:14 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:21:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 1 op/s
Jan 21 14:21:15 compute-0 ceph-mon[75031]: pgmap v1249: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 1 op/s
Jan 21 14:21:16 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143", "format": "json"}]: dispatch
Jan 21 14:21:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:21:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:21:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:21:17 compute-0 podman[253584]: 2026-01-21 14:21:17.334225914 +0000 UTC m=+0.057292701 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 21 14:21:17 compute-0 podman[253583]: 2026-01-21 14:21:17.36352382 +0000 UTC m=+0.086428033 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 21 14:21:17 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:17 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143", "format": "json"}]: dispatch
Jan 21 14:21:17 compute-0 ceph-mon[75031]: pgmap v1250: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:21:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:21:19 compute-0 ceph-mon[75031]: pgmap v1251: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143", "target_sub_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, target_sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/7101b50f-5fdb-4753-a2d7-62f31ffe88aa'.
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp'
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp' to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta'
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 83049bda-88bb-4dcc-9f15-3d09e73d4771 for path b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75'
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp'
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp' to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta'
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, target_sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 14:21:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.067+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.067+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.067+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.067+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.067+0000 7fc51ae5e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 1837d3d1-766d-46d2-bd38-bb850ab9ec75)
Jan 21 14:21:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.088+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.088+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.088+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.088+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:20.088+0000 7fc51be60640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 1837d3d1-766d-46d2-bd38-bb850ab9ec75) -- by 0 seconds
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp'
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp' to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta'
Jan 21 14:21:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 1 op/s
Jan 21 14:21:20 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143", "target_sub_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 14:21:21 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.tnwklj(active, since 36m)
Jan 21 14:21:21 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:21.073+0000 7fc4f4303640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:21 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:21.073+0000 7fc4f4303640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:21 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:21.073+0000 7fc4f4303640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:21 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:21.073+0000 7fc4f4303640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:21 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:21:21.073+0000 7fc4f4303640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.snap/717877ed-ee59-4b6f-a8b8-a5e824a0e143/8cb7ae86-bfd3-4a18-836f-c9c7d266cc44' to b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/7101b50f-5fdb-4753-a2d7-62f31ffe88aa'
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp'
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp' to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta'
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.clone_index] untracking 83049bda-88bb-4dcc-9f15-3d09e73d4771
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp'
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp' to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta'
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp'
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta.tmp' to config b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75/.meta'
Jan 21 14:21:21 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 1837d3d1-766d-46d2-bd38-bb850ab9ec75)
Jan 21 14:21:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 14:21:21 compute-0 ceph-mon[75031]: pgmap v1252: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 1 op/s
Jan 21 14:21:21 compute-0 ceph-mon[75031]: mgrmap e18: compute-0.tnwklj(active, since 36m)
Jan 21 14:21:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Jan 21 14:21:22 compute-0 ceph-mgr[75322]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Jan 21 14:21:22 compute-0 ceph-mgr[75322]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Jan 21 14:21:22 compute-0 ceph-mgr[75322]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Jan 21 14:21:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Jan 21 14:21:22 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7fc5286a2bb0>
Jan 21 14:21:22 compute-0 ceph-mgr[75322]: [progress INFO root] Writing back 19 completed events
Jan 21 14:21:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 21 14:21:22 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:21:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Jan 21 14:21:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:21:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344732762' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:21:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:21:22 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344732762' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:21:23 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:21:23 compute-0 ceph-mon[75031]: pgmap v1253: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Jan 21 14:21:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1344732762' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:21:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1344732762' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:21:23 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.tnwklj(active, since 36m)
Jan 21 14:21:23 compute-0 sudo[253663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:21:23 compute-0 sudo[253663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:21:23 compute-0 sudo[253663]: pam_unix(sudo:session): session closed for user root
Jan 21 14:21:23 compute-0 sudo[253688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:21:23 compute-0 sudo[253688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:21:24 compute-0 sudo[253688]: pam_unix(sudo:session): session closed for user root
Jan 21 14:21:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:21:24 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:21:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:21:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:21:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:21:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:21:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:21:24 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:21:24 compute-0 ceph-mon[75031]: mgrmap e19: compute-0.tnwklj(active, since 36m)
Jan 21 14:21:24 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:21:24 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:21:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 19 KiB/s wr, 2 op/s
Jan 21 14:21:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:21:24 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:21:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:21:24 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:21:24 compute-0 sudo[253744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:21:24 compute-0 sudo[253744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:21:24 compute-0 sudo[253744]: pam_unix(sudo:session): session closed for user root
Jan 21 14:21:24 compute-0 sudo[253769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:21:24 compute-0 sudo[253769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:21:25 compute-0 podman[253806]: 2026-01-21 14:21:24.971843637 +0000 UTC m=+0.026960753 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:21:25 compute-0 podman[253806]: 2026-01-21 14:21:25.719771106 +0000 UTC m=+0.774888252 container create ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 21 14:21:25 compute-0 systemd[1]: Started libpod-conmon-ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5.scope.
Jan 21 14:21:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:21:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:21:25 compute-0 ceph-mon[75031]: pgmap v1254: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 19 KiB/s wr, 2 op/s
Jan 21 14:21:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:21:25 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:21:25 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:21:25 compute-0 podman[253806]: 2026-01-21 14:21:25.968845823 +0000 UTC m=+1.023962949 container init ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 21 14:21:25 compute-0 podman[253806]: 2026-01-21 14:21:25.975616671 +0000 UTC m=+1.030733777 container start ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 14:21:25 compute-0 elastic_kowalevski[253822]: 167 167
Jan 21 14:21:25 compute-0 systemd[1]: libpod-ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5.scope: Deactivated successfully.
Jan 21 14:21:25 compute-0 conmon[253822]: conmon ec769da8333c68ce5487 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5.scope/container/memory.events
Jan 21 14:21:25 compute-0 podman[253806]: 2026-01-21 14:21:25.996231603 +0000 UTC m=+1.051348709 container attach ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 21 14:21:25 compute-0 podman[253806]: 2026-01-21 14:21:25.997091094 +0000 UTC m=+1.052208200 container died ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 21 14:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-eabbefac1d69a04ed01c1a1ba0d99f23e60acfefca12c1c100ba6b6ffcb381ff-merged.mount: Deactivated successfully.
Jan 21 14:21:26 compute-0 podman[253806]: 2026-01-21 14:21:26.26664268 +0000 UTC m=+1.321759786 container remove ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kowalevski, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:21:26 compute-0 systemd[1]: libpod-conmon-ec769da8333c68ce548774f2b5a09fb709ab332ee256eaa77bb6d81f6ee258a5.scope: Deactivated successfully.
Jan 21 14:21:26 compute-0 podman[253847]: 2026-01-21 14:21:26.448835703 +0000 UTC m=+0.031363455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:21:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 6 op/s
Jan 21 14:21:26 compute-0 podman[253847]: 2026-01-21 14:21:26.574078453 +0000 UTC m=+0.156606195 container create 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 14:21:26 compute-0 systemd[1]: Started libpod-conmon-113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a.scope.
Jan 21 14:21:26 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:26 compute-0 podman[253847]: 2026-01-21 14:21:26.739433992 +0000 UTC m=+0.321961794 container init 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:21:26 compute-0 podman[253847]: 2026-01-21 14:21:26.747306776 +0000 UTC m=+0.329834508 container start 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 14:21:26 compute-0 podman[253847]: 2026-01-21 14:21:26.750526051 +0000 UTC m=+0.333053803 container attach 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 14:21:27 compute-0 jolly_torvalds[253863]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:21:27 compute-0 jolly_torvalds[253863]: --> All data devices are unavailable
Jan 21 14:21:27 compute-0 systemd[1]: libpod-113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a.scope: Deactivated successfully.
Jan 21 14:21:27 compute-0 conmon[253863]: conmon 113798a1c35012c35eac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a.scope/container/memory.events
Jan 21 14:21:27 compute-0 podman[253847]: 2026-01-21 14:21:27.230750127 +0000 UTC m=+0.813277889 container died 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3a6826faad0c687424bdf7083f0a4d63e93bfa8c06d8514aa12dc33f905204d-merged.mount: Deactivated successfully.
Jan 21 14:21:27 compute-0 podman[253847]: 2026-01-21 14:21:27.2932478 +0000 UTC m=+0.875775532 container remove 113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 21 14:21:27 compute-0 systemd[1]: libpod-conmon-113798a1c35012c35eacb03db64c0f5441ac3ed735c2cd1602e4bee480de901a.scope: Deactivated successfully.
Jan 21 14:21:27 compute-0 sudo[253769]: pam_unix(sudo:session): session closed for user root
Jan 21 14:21:27 compute-0 sudo[253897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:21:27 compute-0 sudo[253897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:21:27 compute-0 sudo[253897]: pam_unix(sudo:session): session closed for user root
Jan 21 14:21:27 compute-0 sudo[253922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:21:27 compute-0 sudo[253922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:21:27 compute-0 podman[253959]: 2026-01-21 14:21:27.72844099 +0000 UTC m=+0.041382928 container create d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:21:27 compute-0 systemd[1]: Started libpod-conmon-d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693.scope.
Jan 21 14:21:27 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:21:27 compute-0 podman[253959]: 2026-01-21 14:21:27.710480461 +0000 UTC m=+0.023422419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:21:27 compute-0 podman[253959]: 2026-01-21 14:21:27.812278322 +0000 UTC m=+0.125220270 container init d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 21 14:21:27 compute-0 podman[253959]: 2026-01-21 14:21:27.820210118 +0000 UTC m=+0.133152056 container start d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 21 14:21:27 compute-0 podman[253959]: 2026-01-21 14:21:27.824145561 +0000 UTC m=+0.137087519 container attach d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 21 14:21:27 compute-0 clever_shtern[253975]: 167 167
Jan 21 14:21:27 compute-0 systemd[1]: libpod-d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693.scope: Deactivated successfully.
Jan 21 14:21:27 compute-0 podman[253959]: 2026-01-21 14:21:27.826339661 +0000 UTC m=+0.139281599 container died d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 21 14:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-261ef37cc57a32bf4c6d2a93580de36e39753cfbd63e0eb2bae553b038d9f4b7-merged.mount: Deactivated successfully.
Jan 21 14:21:27 compute-0 podman[253959]: 2026-01-21 14:21:27.862822965 +0000 UTC m=+0.175764903 container remove d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:21:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:27 compute-0 ceph-mon[75031]: pgmap v1255: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 6 op/s
Jan 21 14:21:27 compute-0 systemd[1]: libpod-conmon-d5b3a62cb3fb017e4125f474d2250084fde56728353e68f861d449ddf8394693.scope: Deactivated successfully.
Jan 21 14:21:28 compute-0 podman[253998]: 2026-01-21 14:21:28.021252483 +0000 UTC m=+0.038536323 container create 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 14:21:28 compute-0 systemd[1]: Started libpod-conmon-20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8.scope.
Jan 21 14:21:28 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370ccf9210e66484ce7a5d61c2779b5e5dd036fc763113a230a84d181793cae2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370ccf9210e66484ce7a5d61c2779b5e5dd036fc763113a230a84d181793cae2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370ccf9210e66484ce7a5d61c2779b5e5dd036fc763113a230a84d181793cae2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370ccf9210e66484ce7a5d61c2779b5e5dd036fc763113a230a84d181793cae2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:28 compute-0 podman[253998]: 2026-01-21 14:21:28.004726596 +0000 UTC m=+0.022010456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:21:28 compute-0 podman[253998]: 2026-01-21 14:21:28.104409608 +0000 UTC m=+0.121693478 container init 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 14:21:28 compute-0 podman[253998]: 2026-01-21 14:21:28.111230158 +0000 UTC m=+0.128513998 container start 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 21 14:21:28 compute-0 podman[253998]: 2026-01-21 14:21:28.114915173 +0000 UTC m=+0.132199023 container attach 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]: {
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:     "0": [
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:         {
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "devices": [
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "/dev/loop3"
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             ],
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_name": "ceph_lv0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_size": "21470642176",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "name": "ceph_lv0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "tags": {
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.cluster_name": "ceph",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.crush_device_class": "",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.encrypted": "0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.objectstore": "bluestore",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.osd_id": "0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.type": "block",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.vdo": "0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.with_tpm": "0"
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             },
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "type": "block",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "vg_name": "ceph_vg0"
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:         }
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:     ],
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:     "1": [
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:         {
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "devices": [
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "/dev/loop4"
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             ],
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_name": "ceph_lv1",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_size": "21470642176",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "name": "ceph_lv1",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "tags": {
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.cluster_name": "ceph",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.crush_device_class": "",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.encrypted": "0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.objectstore": "bluestore",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.osd_id": "1",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.type": "block",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.vdo": "0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.with_tpm": "0"
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             },
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "type": "block",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "vg_name": "ceph_vg1"
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:         }
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:     ],
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:     "2": [
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:         {
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "devices": [
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "/dev/loop5"
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             ],
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_name": "ceph_lv2",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_size": "21470642176",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "name": "ceph_lv2",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "tags": {
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.cluster_name": "ceph",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.crush_device_class": "",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.encrypted": "0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.objectstore": "bluestore",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.osd_id": "2",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.type": "block",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.vdo": "0",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:                 "ceph.with_tpm": "0"
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             },
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "type": "block",
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:             "vg_name": "ceph_vg2"
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:         }
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]:     ]
Jan 21 14:21:28 compute-0 strange_heisenberg[254015]: }
Jan 21 14:21:28 compute-0 systemd[1]: libpod-20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8.scope: Deactivated successfully.
Jan 21 14:21:28 compute-0 podman[253998]: 2026-01-21 14:21:28.402016911 +0000 UTC m=+0.419300761 container died 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-370ccf9210e66484ce7a5d61c2779b5e5dd036fc763113a230a84d181793cae2-merged.mount: Deactivated successfully.
Jan 21 14:21:28 compute-0 podman[253998]: 2026-01-21 14:21:28.441801203 +0000 UTC m=+0.459085043 container remove 20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 21 14:21:28 compute-0 systemd[1]: libpod-conmon-20bf7380dbf51a145fac0e4cb2c30fca92bc4c73d383344dee66c8027e185cc8.scope: Deactivated successfully.
Jan 21 14:21:28 compute-0 sudo[253922]: pam_unix(sudo:session): session closed for user root
Jan 21 14:21:28 compute-0 sudo[254035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:21:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 48 KiB/s wr, 5 op/s
Jan 21 14:21:28 compute-0 sudo[254035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:21:28 compute-0 sudo[254035]: pam_unix(sudo:session): session closed for user root
Jan 21 14:21:28 compute-0 sudo[254060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:21:28 compute-0 sudo[254060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:21:28 compute-0 podman[254098]: 2026-01-21 14:21:28.99005799 +0000 UTC m=+0.114854068 container create 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 21 14:21:28 compute-0 podman[254098]: 2026-01-21 14:21:28.897137026 +0000 UTC m=+0.021933134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:21:29 compute-0 systemd[1]: Started libpod-conmon-4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94.scope.
Jan 21 14:21:29 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:21:29 compute-0 podman[254098]: 2026-01-21 14:21:29.091038713 +0000 UTC m=+0.215834791 container init 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:21:29 compute-0 podman[254098]: 2026-01-21 14:21:29.099316856 +0000 UTC m=+0.224112934 container start 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:21:29 compute-0 jolly_mahavira[254115]: 167 167
Jan 21 14:21:29 compute-0 systemd[1]: libpod-4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94.scope: Deactivated successfully.
Jan 21 14:21:29 compute-0 podman[254098]: 2026-01-21 14:21:29.105099992 +0000 UTC m=+0.229896090 container attach 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:21:29 compute-0 podman[254098]: 2026-01-21 14:21:29.105472231 +0000 UTC m=+0.230268309 container died 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:21:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-13b249de87c300435c9248158d9a7af734e5f94184ea1ea19998b6265dcb7ce5-merged.mount: Deactivated successfully.
Jan 21 14:21:29 compute-0 podman[254098]: 2026-01-21 14:21:29.145354364 +0000 UTC m=+0.270150482 container remove 4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 21 14:21:29 compute-0 systemd[1]: libpod-conmon-4ae36e9442ca77bb265ef3595824c47898eca17bb82a3040295b9721ed119f94.scope: Deactivated successfully.
Jan 21 14:21:29 compute-0 podman[254140]: 2026-01-21 14:21:29.309066144 +0000 UTC m=+0.044050251 container create bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 21 14:21:29 compute-0 systemd[1]: Started libpod-conmon-bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7.scope.
Jan 21 14:21:29 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3526866c9da0426b9a4483b0c33e2779abf591f3486bac47f8648341ef83f052/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3526866c9da0426b9a4483b0c33e2779abf591f3486bac47f8648341ef83f052/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3526866c9da0426b9a4483b0c33e2779abf591f3486bac47f8648341ef83f052/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3526866c9da0426b9a4483b0c33e2779abf591f3486bac47f8648341ef83f052/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:21:29 compute-0 podman[254140]: 2026-01-21 14:21:29.288974024 +0000 UTC m=+0.023958211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:21:29 compute-0 podman[254140]: 2026-01-21 14:21:29.40594977 +0000 UTC m=+0.140933907 container init bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 14:21:29 compute-0 podman[254140]: 2026-01-21 14:21:29.411678806 +0000 UTC m=+0.146662913 container start bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:21:29 compute-0 podman[254140]: 2026-01-21 14:21:29.414923201 +0000 UTC m=+0.149907328 container attach bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 21 14:21:29 compute-0 ceph-mon[75031]: pgmap v1256: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 48 KiB/s wr, 5 op/s
Jan 21 14:21:30 compute-0 lvm[254235]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:21:30 compute-0 lvm[254234]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:21:30 compute-0 lvm[254234]: VG ceph_vg0 finished
Jan 21 14:21:30 compute-0 lvm[254235]: VG ceph_vg1 finished
Jan 21 14:21:30 compute-0 lvm[254237]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:21:30 compute-0 lvm[254237]: VG ceph_vg2 finished
Jan 21 14:21:30 compute-0 lvm[254239]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:21:30 compute-0 lvm[254239]: VG ceph_vg2 finished
Jan 21 14:21:30 compute-0 hungry_joliot[254156]: {}
Jan 21 14:21:30 compute-0 lvm[254241]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:21:30 compute-0 lvm[254241]: VG ceph_vg2 finished
Jan 21 14:21:30 compute-0 systemd[1]: libpod-bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7.scope: Deactivated successfully.
Jan 21 14:21:30 compute-0 systemd[1]: libpod-bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7.scope: Consumed 1.334s CPU time.
Jan 21 14:21:30 compute-0 podman[254140]: 2026-01-21 14:21:30.235274355 +0000 UTC m=+0.970258482 container died bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:21:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3526866c9da0426b9a4483b0c33e2779abf591f3486bac47f8648341ef83f052-merged.mount: Deactivated successfully.
Jan 21 14:21:30 compute-0 podman[254140]: 2026-01-21 14:21:30.275367032 +0000 UTC m=+1.010351139 container remove bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 14:21:30 compute-0 systemd[1]: libpod-conmon-bbe102f5870c334bfc89233c2120b84409e5ab5192fe60a2bbef0b4e2632b9e7.scope: Deactivated successfully.
Jan 21 14:21:30 compute-0 sudo[254060]: pam_unix(sudo:session): session closed for user root
Jan 21 14:21:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:21:30 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:21:30 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:21:30 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:21:30 compute-0 sudo[254254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:21:30 compute-0 sudo[254254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:21:30 compute-0 sudo[254254]: pam_unix(sudo:session): session closed for user root
Jan 21 14:21:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 50 KiB/s wr, 6 op/s
Jan 21 14:21:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:21:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:21:31 compute-0 ceph-mon[75031]: pgmap v1257: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 50 KiB/s wr, 6 op/s
Jan 21 14:21:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 40 KiB/s wr, 5 op/s
Jan 21 14:21:32 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:33 compute-0 ceph-mon[75031]: pgmap v1258: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 40 KiB/s wr, 5 op/s
Jan 21 14:21:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:21:33.913 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:21:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:21:33.914 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:21:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:21:33.914 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:21:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 40 KiB/s wr, 5 op/s
Jan 21 14:21:35 compute-0 ceph-mon[75031]: pgmap v1259: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 40 KiB/s wr, 5 op/s
Jan 21 14:21:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 40 KiB/s wr, 4 op/s
Jan 21 14:21:37 compute-0 ceph-mon[75031]: pgmap v1260: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 40 KiB/s wr, 4 op/s
Jan 21 14:21:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 14:21:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:21:39
Jan 21 14:21:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:21:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:21:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'vms', 'images', 'backups', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.mgr']
Jan 21 14:21:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:21:40 compute-0 ceph-mon[75031]: pgmap v1261: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 14:21:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:21:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:21:41 compute-0 ceph-mon[75031]: pgmap v1262: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 14:21:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:43 compute-0 ceph-mon[75031]: pgmap v1263: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:45 compute-0 ceph-mon[75031]: pgmap v1264: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:47 compute-0 ceph-mon[75031]: pgmap v1265: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:48 compute-0 podman[254280]: 2026-01-21 14:21:48.339331636 +0000 UTC m=+0.057957407 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 21 14:21:48 compute-0 podman[254279]: 2026-01-21 14:21:48.390495643 +0000 UTC m=+0.107424115 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 21 14:21:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:49 compute-0 ceph-mon[75031]: pgmap v1266: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:49 compute-0 nova_compute[239261]: 2026-01-21 14:21:49.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:21:49 compute-0 nova_compute[239261]: 2026-01-21 14:21:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:21:49 compute-0 nova_compute[239261]: 2026-01-21 14:21:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:21:49 compute-0 nova_compute[239261]: 2026-01-21 14:21:49.743 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:21:49 compute-0 nova_compute[239261]: 2026-01-21 14:21:49.743 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:50 compute-0 nova_compute[239261]: 2026-01-21 14:21:50.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:21:50 compute-0 nova_compute[239261]: 2026-01-21 14:21:50.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662144933880528 of space, bias 1.0, pg target 0.19986434801641584 quantized to 32 (current 32)
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.000509297280306306 of space, bias 4.0, pg target 0.6111567363675672 quantized to 16 (current 16)
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:21:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:21:51 compute-0 ceph-mon[75031]: pgmap v1267: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:51 compute-0 nova_compute[239261]: 2026-01-21 14:21:51.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:21:51 compute-0 nova_compute[239261]: 2026-01-21 14:21:51.809 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:21:51 compute-0 nova_compute[239261]: 2026-01-21 14:21:51.810 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:21:51 compute-0 nova_compute[239261]: 2026-01-21 14:21:51.810 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:21:51 compute-0 nova_compute[239261]: 2026-01-21 14:21:51.810 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:21:51 compute-0 nova_compute[239261]: 2026-01-21 14:21:51.811 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:21:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:21:52 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337496685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:21:52 compute-0 nova_compute[239261]: 2026-01-21 14:21:52.310 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:21:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:53 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/337496685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:21:53 compute-0 nova_compute[239261]: 2026-01-21 14:21:53.525 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:21:53 compute-0 nova_compute[239261]: 2026-01-21 14:21:53.526 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5003MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:21:53 compute-0 nova_compute[239261]: 2026-01-21 14:21:53.527 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:21:53 compute-0 nova_compute[239261]: 2026-01-21 14:21:53.527 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:21:53 compute-0 nova_compute[239261]: 2026-01-21 14:21:53.596 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:21:53 compute-0 nova_compute[239261]: 2026-01-21 14:21:53.596 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:21:53 compute-0 nova_compute[239261]: 2026-01-21 14:21:53.622 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:21:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:21:54 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159399189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:21:54 compute-0 nova_compute[239261]: 2026-01-21 14:21:54.131 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:21:54 compute-0 nova_compute[239261]: 2026-01-21 14:21:54.140 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:21:54 compute-0 nova_compute[239261]: 2026-01-21 14:21:54.160 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:21:54 compute-0 nova_compute[239261]: 2026-01-21 14:21:54.163 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:21:54 compute-0 nova_compute[239261]: 2026-01-21 14:21:54.163 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:21:54 compute-0 ceph-mon[75031]: pgmap v1268: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:54 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2159399189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:21:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:55 compute-0 ceph-mon[75031]: pgmap v1269: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:56 compute-0 nova_compute[239261]: 2026-01-21 14:21:56.159 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:21:56 compute-0 nova_compute[239261]: 2026-01-21 14:21:56.177 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:21:56 compute-0 nova_compute[239261]: 2026-01-21 14:21:56.178 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:21:56 compute-0 nova_compute[239261]: 2026-01-21 14:21:56.178 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:21:56 compute-0 nova_compute[239261]: 2026-01-21 14:21:56.178 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:21:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:56 compute-0 nova_compute[239261]: 2026-01-21 14:21:56.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:21:57 compute-0 ceph-mon[75031]: pgmap v1270: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:21:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:21:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:22:00 compute-0 ceph-mon[75031]: pgmap v1271: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:22:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:22:01 compute-0 ceph-mon[75031]: pgmap v1272: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:22:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:22:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:03 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 14:22:03 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:04 compute-0 ceph-mon[75031]: pgmap v1273: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:22:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:22:05 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 14:22:05 compute-0 ceph-mon[75031]: pgmap v1274: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.153152) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326153227, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2453, "num_deletes": 507, "total_data_size": 3556250, "memory_usage": 3624616, "flush_reason": "Manual Compaction"}
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326409268, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3482968, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26144, "largest_seqno": 28596, "table_properties": {"data_size": 3472240, "index_size": 6390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 26695, "raw_average_key_size": 20, "raw_value_size": 3448236, "raw_average_value_size": 2654, "num_data_blocks": 282, "num_entries": 1299, "num_filter_entries": 1299, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769005127, "oldest_key_time": 1769005127, "file_creation_time": 1769005326, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 256196 microseconds, and 9518 cpu microseconds.
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.409352) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3482968 bytes OK
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.409386) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.412287) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.412314) EVENT_LOG_v1 {"time_micros": 1769005326412307, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.412343) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3544728, prev total WAL file size 3544728, number of live WAL files 2.
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.413693) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3401KB)], [59(10MB)]
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326413776, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 14155264, "oldest_snapshot_seqno": -1}
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5819 keys, 9104765 bytes, temperature: kUnknown
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326531672, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9104765, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9065027, "index_size": 24076, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 145410, "raw_average_key_size": 24, "raw_value_size": 8959881, "raw_average_value_size": 1539, "num_data_blocks": 991, "num_entries": 5819, "num_filter_entries": 5819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769005326, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.532032) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9104765 bytes
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.533870) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.0 rd, 77.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.7) write-amplify(2.6) OK, records in: 6852, records dropped: 1033 output_compression: NoCompression
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.533891) EVENT_LOG_v1 {"time_micros": 1769005326533881, "job": 32, "event": "compaction_finished", "compaction_time_micros": 117997, "compaction_time_cpu_micros": 28274, "output_level": 6, "num_output_files": 1, "total_output_size": 9104765, "num_input_records": 6852, "num_output_records": 5819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326534604, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005326536774, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.413518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.536924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.536930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.536932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.536934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:22:06 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:22:06.536936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:22:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 21 14:22:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:06 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 14:22:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 14:22:06 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 14:22:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:22:06 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:22:07 compute-0 ceph-mon[75031]: pgmap v1275: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 21 14:22:07 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 14:22:07 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:22:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 21 14:22:09 compute-0 ceph-mon[75031]: pgmap v1276: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 21 14:22:10 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:22:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 14:22:10 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5be10ce2-edd5-48e2-9745-1baad2e01576/2bcde8f3-ae45-419c-921c-04b7d67afb7e'.
Jan 21 14:22:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5be10ce2-edd5-48e2-9745-1baad2e01576/.meta.tmp'
Jan 21 14:22:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5be10ce2-edd5-48e2-9745-1baad2e01576/.meta.tmp' to config b'/volumes/_nogroup/5be10ce2-edd5-48e2-9745-1baad2e01576/.meta'
Jan 21 14:22:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 14:22:10 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "format": "json"}]: dispatch
Jan 21 14:22:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 14:22:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 14:22:10 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:22:10 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:22:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.6 KiB/s wr, 0 op/s
Jan 21 14:22:10 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:22:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:22:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc547f763d0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50ada2ca0>)]
Jan 21 14:22:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:22:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:22:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:22:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:22:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:22:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:22:11 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:22:11 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "format": "json"}]: dispatch
Jan 21 14:22:11 compute-0 ceph-mon[75031]: pgmap v1277: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.6 KiB/s wr, 0 op/s
Jan 21 14:22:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.6 KiB/s wr, 0 op/s
Jan 21 14:22:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:13 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.tnwklj(active, since 37m)
Jan 21 14:22:13 compute-0 ceph-mon[75031]: pgmap v1278: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.6 KiB/s wr, 0 op/s
Jan 21 14:22:14 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 21 14:22:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 14:22:14 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 14:22:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.6 KiB/s wr, 0 op/s
Jan 21 14:22:14 compute-0 ceph-mon[75031]: mgrmap e20: compute-0.tnwklj(active, since 37m)
Jan 21 14:22:15 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "new_size": 2147483648, "format": "json"}]: dispatch
Jan 21 14:22:15 compute-0 ceph-mon[75031]: pgmap v1279: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.6 KiB/s wr, 0 op/s
Jan 21 14:22:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 22 KiB/s wr, 1 op/s
Jan 21 14:22:17 compute-0 ceph-mon[75031]: pgmap v1280: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 22 KiB/s wr, 1 op/s
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "format": "json"}]: dispatch
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5be10ce2-edd5-48e2-9745-1baad2e01576, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5be10ce2-edd5-48e2-9745-1baad2e01576, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5be10ce2-edd5-48e2-9745-1baad2e01576' of type subvolume
Jan 21 14:22:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.853+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5be10ce2-edd5-48e2-9745-1baad2e01576' of type subvolume
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5be10ce2-edd5-48e2-9745-1baad2e01576'' moved to trashcan
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5be10ce2-edd5-48e2-9745-1baad2e01576, vol_name:cephfs) < ""
Jan 21 14:22:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.870+0000 7fc518659640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.870+0000 7fc518659640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.870+0000 7fc518659640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.870+0000 7fc518659640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.870+0000 7fc518659640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.893+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.893+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.893+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.893+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:17.893+0000 7fc517e58640 -1 client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:17 compute-0 ceph-mgr[75322]: client.0 error registering admin socket command: (17) File exists
Jan 21 14:22:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 22 KiB/s wr, 1 op/s
Jan 21 14:22:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "format": "json"}]: dispatch
Jan 21 14:22:18 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5be10ce2-edd5-48e2-9745-1baad2e01576", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:19 compute-0 podman[254392]: 2026-01-21 14:22:19.333710115 +0000 UTC m=+0.057632503 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 14:22:19 compute-0 podman[254391]: 2026-01-21 14:22:19.360899626 +0000 UTC m=+0.081780699 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 21 14:22:19 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.tnwklj(active, since 37m)
Jan 21 14:22:19 compute-0 ceph-mon[75031]: pgmap v1281: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 22 KiB/s wr, 1 op/s
Jan 21 14:22:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 47 KiB/s wr, 2 op/s
Jan 21 14:22:20 compute-0 ceph-mon[75031]: mgrmap e21: compute-0.tnwklj(active, since 37m)
Jan 21 14:22:21 compute-0 ceph-mon[75031]: pgmap v1282: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 47 KiB/s wr, 2 op/s
Jan 21 14:22:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s wr, 2 op/s
Jan 21 14:22:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:22:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3436247533' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:22:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:22:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3436247533' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:22:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:24 compute-0 ceph-mon[75031]: pgmap v1283: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s wr, 2 op/s
Jan 21 14:22:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/3436247533' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:22:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/3436247533' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:22:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s wr, 2 op/s
Jan 21 14:22:25 compute-0 ceph-mon[75031]: pgmap v1284: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s wr, 2 op/s
Jan 21 14:22:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 42 KiB/s wr, 3 op/s
Jan 21 14:22:27 compute-0 ceph-mon[75031]: pgmap v1285: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 42 KiB/s wr, 3 op/s
Jan 21 14:22:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:22:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 14:22:27 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5b257f1d-b61f-419d-bc85-c380d554748f/49e6749a-b814-4d79-b2c3-87dadc89263e'.
Jan 21 14:22:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5b257f1d-b61f-419d-bc85-c380d554748f/.meta.tmp'
Jan 21 14:22:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5b257f1d-b61f-419d-bc85-c380d554748f/.meta.tmp' to config b'/volumes/_nogroup/5b257f1d-b61f-419d-bc85-c380d554748f/.meta'
Jan 21 14:22:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 14:22:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "format": "json"}]: dispatch
Jan 21 14:22:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 14:22:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 14:22:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:22:27 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:22:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 25 KiB/s wr, 2 op/s
Jan 21 14:22:28 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:22:28 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "format": "json"}]: dispatch
Jan 21 14:22:28 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:22:29 compute-0 ceph-mon[75031]: pgmap v1286: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 25 KiB/s wr, 2 op/s
Jan 21 14:22:30 compute-0 sudo[254434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:22:30 compute-0 sudo[254434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:22:30 compute-0 sudo[254434]: pam_unix(sudo:session): session closed for user root
Jan 21 14:22:30 compute-0 sudo[254459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:22:30 compute-0 sudo[254459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:22:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 14:22:31 compute-0 sudo[254459]: pam_unix(sudo:session): session closed for user root
Jan 21 14:22:31 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Jan 21 14:22:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 14:22:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:22:31 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:22:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:22:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:22:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:22:31 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 14:22:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:22:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:22:31 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:22:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:22:31 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:22:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:22:31 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:22:31 compute-0 sudo[254515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:22:31 compute-0 sudo[254515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:22:31 compute-0 sudo[254515]: pam_unix(sudo:session): session closed for user root
Jan 21 14:22:31 compute-0 ceph-mon[75031]: pgmap v1287: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 14:22:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:22:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:22:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:22:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:22:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:22:31 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:22:31 compute-0 sudo[254540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:22:31 compute-0 sudo[254540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:22:32 compute-0 podman[254578]: 2026-01-21 14:22:32.05379886 +0000 UTC m=+0.046420500 container create 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:22:32 compute-0 systemd[1]: Started libpod-conmon-9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6.scope.
Jan 21 14:22:32 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:22:32 compute-0 podman[254578]: 2026-01-21 14:22:32.030961755 +0000 UTC m=+0.023583415 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:22:32 compute-0 podman[254578]: 2026-01-21 14:22:32.157309787 +0000 UTC m=+0.149931487 container init 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:22:32 compute-0 podman[254578]: 2026-01-21 14:22:32.171975714 +0000 UTC m=+0.164597354 container start 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:22:32 compute-0 inspiring_snyder[254594]: 167 167
Jan 21 14:22:32 compute-0 systemd[1]: libpod-9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6.scope: Deactivated successfully.
Jan 21 14:22:32 compute-0 conmon[254594]: conmon 9e8bb229b0ff2296a83a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6.scope/container/memory.events
Jan 21 14:22:32 compute-0 podman[254578]: 2026-01-21 14:22:32.196294365 +0000 UTC m=+0.188916025 container attach 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 21 14:22:32 compute-0 podman[254578]: 2026-01-21 14:22:32.197772051 +0000 UTC m=+0.190393701 container died 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:22:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-550644e1a7b25e2e2a34467954cc5c59aa4d1d8d9d973cc8f2830a6559703d32-merged.mount: Deactivated successfully.
Jan 21 14:22:32 compute-0 podman[254578]: 2026-01-21 14:22:32.240811707 +0000 UTC m=+0.233433347 container remove 9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:22:32 compute-0 systemd[1]: libpod-conmon-9e8bb229b0ff2296a83ab3fc824d2bb3471334df740308d4f102c921ab7eeaf6.scope: Deactivated successfully.
Jan 21 14:22:32 compute-0 podman[254618]: 2026-01-21 14:22:32.447168474 +0000 UTC m=+0.069146811 container create 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 14:22:32 compute-0 systemd[1]: Started libpod-conmon-2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08.scope.
Jan 21 14:22:32 compute-0 podman[254618]: 2026-01-21 14:22:32.404000616 +0000 UTC m=+0.025978963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:22:32 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:32 compute-0 podman[254618]: 2026-01-21 14:22:32.541358555 +0000 UTC m=+0.163336902 container init 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 14:22:32 compute-0 podman[254618]: 2026-01-21 14:22:32.549340869 +0000 UTC m=+0.171319196 container start 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 14:22:32 compute-0 podman[254618]: 2026-01-21 14:22:32.553580832 +0000 UTC m=+0.175559179 container attach 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:22:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 23 KiB/s wr, 2 op/s
Jan 21 14:22:32 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Jan 21 14:22:33 compute-0 silly_diffie[254635]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:22:33 compute-0 silly_diffie[254635]: --> All data devices are unavailable
Jan 21 14:22:33 compute-0 systemd[1]: libpod-2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08.scope: Deactivated successfully.
Jan 21 14:22:33 compute-0 podman[254618]: 2026-01-21 14:22:33.033254465 +0000 UTC m=+0.655232792 container died 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:22:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-efb5eb444f9a6b8cf72b22890113e8dc247bafa25e95cfa59db5510b4ab59868-merged.mount: Deactivated successfully.
Jan 21 14:22:33 compute-0 podman[254618]: 2026-01-21 14:22:33.095736645 +0000 UTC m=+0.717715012 container remove 2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:22:33 compute-0 systemd[1]: libpod-conmon-2b4f4363607f1be83252999f045b94defa4e699b4c7b0e58e92412a992b69f08.scope: Deactivated successfully.
Jan 21 14:22:33 compute-0 sudo[254540]: pam_unix(sudo:session): session closed for user root
Jan 21 14:22:33 compute-0 sudo[254668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:22:33 compute-0 sudo[254668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:22:33 compute-0 sudo[254668]: pam_unix(sudo:session): session closed for user root
Jan 21 14:22:33 compute-0 sudo[254693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:22:33 compute-0 sudo[254693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:22:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:33 compute-0 podman[254730]: 2026-01-21 14:22:33.5647796 +0000 UTC m=+0.040488406 container create 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 14:22:33 compute-0 systemd[1]: Started libpod-conmon-90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076.scope.
Jan 21 14:22:33 compute-0 podman[254730]: 2026-01-21 14:22:33.545248055 +0000 UTC m=+0.020956881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:22:33 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:22:33 compute-0 podman[254730]: 2026-01-21 14:22:33.678254899 +0000 UTC m=+0.153963725 container init 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 21 14:22:33 compute-0 podman[254730]: 2026-01-21 14:22:33.689173723 +0000 UTC m=+0.164882569 container start 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:22:33 compute-0 focused_poitras[254746]: 167 167
Jan 21 14:22:33 compute-0 systemd[1]: libpod-90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076.scope: Deactivated successfully.
Jan 21 14:22:33 compute-0 podman[254730]: 2026-01-21 14:22:33.704905227 +0000 UTC m=+0.180614073 container attach 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 14:22:33 compute-0 podman[254730]: 2026-01-21 14:22:33.705615743 +0000 UTC m=+0.181324559 container died 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:22:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b520f474f61d91cb6bc7d038e744ae05cf03bd03a78cc9db83e62d2e214880a5-merged.mount: Deactivated successfully.
Jan 21 14:22:33 compute-0 podman[254730]: 2026-01-21 14:22:33.766353471 +0000 UTC m=+0.242062287 container remove 90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_poitras, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:22:33 compute-0 systemd[1]: libpod-conmon-90a1ead64017df621e6e9ee472c524ba50d1a63e027fcdb014a06e27de467076.scope: Deactivated successfully.
Jan 21 14:22:33 compute-0 ceph-mon[75031]: pgmap v1288: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 23 KiB/s wr, 2 op/s
Jan 21 14:22:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:22:33.914 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:22:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:22:33.915 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:22:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:22:33.915 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:22:33 compute-0 podman[254768]: 2026-01-21 14:22:33.973035466 +0000 UTC m=+0.048040049 container create 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 21 14:22:34 compute-0 systemd[1]: Started libpod-conmon-06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83.scope.
Jan 21 14:22:34 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:22:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67db0ca208345a1abf484853214f1d07ea054d34d5aaccd970944365378a31e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67db0ca208345a1abf484853214f1d07ea054d34d5aaccd970944365378a31e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67db0ca208345a1abf484853214f1d07ea054d34d5aaccd970944365378a31e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:34 compute-0 podman[254768]: 2026-01-21 14:22:33.950686542 +0000 UTC m=+0.025691115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:22:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67db0ca208345a1abf484853214f1d07ea054d34d5aaccd970944365378a31e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:34 compute-0 podman[254768]: 2026-01-21 14:22:34.072925485 +0000 UTC m=+0.147930058 container init 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 14:22:34 compute-0 podman[254768]: 2026-01-21 14:22:34.080160961 +0000 UTC m=+0.155165514 container start 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 14:22:34 compute-0 podman[254768]: 2026-01-21 14:22:34.085116221 +0000 UTC m=+0.160120774 container attach 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:22:34 compute-0 reverent_volhard[254784]: {
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:     "0": [
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:         {
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "devices": [
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "/dev/loop3"
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             ],
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_name": "ceph_lv0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_size": "21470642176",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "name": "ceph_lv0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "tags": {
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.cluster_name": "ceph",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.crush_device_class": "",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.encrypted": "0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.objectstore": "bluestore",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.osd_id": "0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.type": "block",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.vdo": "0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.with_tpm": "0"
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             },
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "type": "block",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "vg_name": "ceph_vg0"
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:         }
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:     ],
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:     "1": [
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:         {
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "devices": [
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "/dev/loop4"
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             ],
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_name": "ceph_lv1",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_size": "21470642176",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "name": "ceph_lv1",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "tags": {
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.cluster_name": "ceph",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.crush_device_class": "",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.encrypted": "0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.objectstore": "bluestore",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.osd_id": "1",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.type": "block",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.vdo": "0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.with_tpm": "0"
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             },
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "type": "block",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "vg_name": "ceph_vg1"
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:         }
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:     ],
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:     "2": [
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:         {
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "devices": [
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "/dev/loop5"
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             ],
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_name": "ceph_lv2",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_size": "21470642176",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "name": "ceph_lv2",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "tags": {
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.cluster_name": "ceph",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.crush_device_class": "",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.encrypted": "0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.objectstore": "bluestore",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.osd_id": "2",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.type": "block",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.vdo": "0",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:                 "ceph.with_tpm": "0"
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             },
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "type": "block",
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:             "vg_name": "ceph_vg2"
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:         }
Jan 21 14:22:34 compute-0 reverent_volhard[254784]:     ]
Jan 21 14:22:34 compute-0 reverent_volhard[254784]: }
Jan 21 14:22:34 compute-0 systemd[1]: libpod-06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83.scope: Deactivated successfully.
Jan 21 14:22:34 compute-0 podman[254768]: 2026-01-21 14:22:34.381670922 +0000 UTC m=+0.456675475 container died 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 14:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f67db0ca208345a1abf484853214f1d07ea054d34d5aaccd970944365378a31e-merged.mount: Deactivated successfully.
Jan 21 14:22:34 compute-0 podman[254768]: 2026-01-21 14:22:34.472286055 +0000 UTC m=+0.547290608 container remove 06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:22:34 compute-0 systemd[1]: libpod-conmon-06a835f19d5bbf42697cd8fae3e3e909859249aa8f1d77c5755108de9e52ae83.scope: Deactivated successfully.
Jan 21 14:22:34 compute-0 sudo[254693]: pam_unix(sudo:session): session closed for user root
Jan 21 14:22:34 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "format": "json"}]: dispatch
Jan 21 14:22:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5b257f1d-b61f-419d-bc85-c380d554748f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5b257f1d-b61f-419d-bc85-c380d554748f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:34 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b257f1d-b61f-419d-bc85-c380d554748f' of type subvolume
Jan 21 14:22:34 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:34.572+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b257f1d-b61f-419d-bc85-c380d554748f' of type subvolume
Jan 21 14:22:34 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 14:22:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5b257f1d-b61f-419d-bc85-c380d554748f'' moved to trashcan
Jan 21 14:22:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 23 KiB/s wr, 2 op/s
Jan 21 14:22:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:22:34 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b257f1d-b61f-419d-bc85-c380d554748f, vol_name:cephfs) < ""
Jan 21 14:22:34 compute-0 sudo[254807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:22:34 compute-0 sudo[254807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:22:34 compute-0 sudo[254807]: pam_unix(sudo:session): session closed for user root
Jan 21 14:22:34 compute-0 sudo[254832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:22:34 compute-0 sudo[254832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:22:34 compute-0 podman[254869]: 2026-01-21 14:22:34.970081179 +0000 UTC m=+0.035050064 container create f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:22:35 compute-0 systemd[1]: Started libpod-conmon-f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d.scope.
Jan 21 14:22:35 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:22:35 compute-0 podman[254869]: 2026-01-21 14:22:34.955381272 +0000 UTC m=+0.020350177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:22:35 compute-0 podman[254869]: 2026-01-21 14:22:35.059033432 +0000 UTC m=+0.124002337 container init f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 14:22:35 compute-0 podman[254869]: 2026-01-21 14:22:35.066547654 +0000 UTC m=+0.131516549 container start f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:22:35 compute-0 podman[254869]: 2026-01-21 14:22:35.069876186 +0000 UTC m=+0.134845091 container attach f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 21 14:22:35 compute-0 epic_hopper[254886]: 167 167
Jan 21 14:22:35 compute-0 systemd[1]: libpod-f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d.scope: Deactivated successfully.
Jan 21 14:22:35 compute-0 podman[254869]: 2026-01-21 14:22:35.072064438 +0000 UTC m=+0.137033323 container died f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 21 14:22:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d87bcfc5069ba9af18865b7c51e87fb4818088fc1ba55839f799594c76385c00-merged.mount: Deactivated successfully.
Jan 21 14:22:35 compute-0 podman[254869]: 2026-01-21 14:22:35.113735202 +0000 UTC m=+0.178704087 container remove f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hopper, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 14:22:35 compute-0 systemd[1]: libpod-conmon-f7811babc7d1eeba2878fdb9167d570c316375697f15b1f92dbc0cfe0b6eec1d.scope: Deactivated successfully.
Jan 21 14:22:35 compute-0 podman[254910]: 2026-01-21 14:22:35.292228081 +0000 UTC m=+0.049005522 container create e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 21 14:22:35 compute-0 systemd[1]: Started libpod-conmon-e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e.scope.
Jan 21 14:22:35 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:22:35 compute-0 podman[254910]: 2026-01-21 14:22:35.270085334 +0000 UTC m=+0.026862865 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82466a8c95dcb541b96ae141a5491505d107b74ecb99ec13b86591f8ac10036/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82466a8c95dcb541b96ae141a5491505d107b74ecb99ec13b86591f8ac10036/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82466a8c95dcb541b96ae141a5491505d107b74ecb99ec13b86591f8ac10036/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82466a8c95dcb541b96ae141a5491505d107b74ecb99ec13b86591f8ac10036/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:22:35 compute-0 podman[254910]: 2026-01-21 14:22:35.379402492 +0000 UTC m=+0.136179963 container init e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:22:35 compute-0 podman[254910]: 2026-01-21 14:22:35.385114431 +0000 UTC m=+0.141891872 container start e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 14:22:35 compute-0 podman[254910]: 2026-01-21 14:22:35.388543053 +0000 UTC m=+0.145320494 container attach e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:22:35 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "format": "json"}]: dispatch
Jan 21 14:22:35 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b257f1d-b61f-419d-bc85-c380d554748f", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:35 compute-0 ceph-mon[75031]: pgmap v1289: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 23 KiB/s wr, 2 op/s
Jan 21 14:22:36 compute-0 lvm[255003]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:22:36 compute-0 lvm[255005]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:22:36 compute-0 lvm[255003]: VG ceph_vg0 finished
Jan 21 14:22:36 compute-0 lvm[255005]: VG ceph_vg1 finished
Jan 21 14:22:36 compute-0 lvm[255007]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:22:36 compute-0 lvm[255007]: VG ceph_vg2 finished
Jan 21 14:22:36 compute-0 lvm[255008]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:22:36 compute-0 lvm[255008]: VG ceph_vg1 finished
Jan 21 14:22:36 compute-0 condescending_feynman[254926]: {}
Jan 21 14:22:36 compute-0 systemd[1]: libpod-e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e.scope: Deactivated successfully.
Jan 21 14:22:36 compute-0 systemd[1]: libpod-e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e.scope: Consumed 1.466s CPU time.
Jan 21 14:22:36 compute-0 podman[254910]: 2026-01-21 14:22:36.279079947 +0000 UTC m=+1.035857408 container died e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:22:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c82466a8c95dcb541b96ae141a5491505d107b74ecb99ec13b86591f8ac10036-merged.mount: Deactivated successfully.
Jan 21 14:22:36 compute-0 podman[254910]: 2026-01-21 14:22:36.328743065 +0000 UTC m=+1.085520506 container remove e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 14:22:36 compute-0 systemd[1]: libpod-conmon-e7d5fbca21317031c22a622a9f2732b6080de5c3a1d50faa707a95704d07979e.scope: Deactivated successfully.
Jan 21 14:22:36 compute-0 sudo[254832]: pam_unix(sudo:session): session closed for user root
Jan 21 14:22:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:22:36 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:22:36 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:22:36 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:22:36 compute-0 sudo[255024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:22:36 compute-0 sudo[255024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:22:36 compute-0 sudo[255024]: pam_unix(sudo:session): session closed for user root
Jan 21 14:22:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 2 op/s
Jan 21 14:22:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:22:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:22:37 compute-0 ceph-mon[75031]: pgmap v1290: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 2 op/s
Jan 21 14:22:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 1 op/s
Jan 21 14:22:39 compute-0 ceph-mon[75031]: pgmap v1291: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 1 op/s
Jan 21 14:22:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:22:39
Jan 21 14:22:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:22:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:22:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.meta', 'default.rgw.log']
Jan 21 14:22:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:22:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 55 KiB/s wr, 3 op/s
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc5063dedf0>)]
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50adb7b50>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50a4a3850>)]
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:22:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:22:41 compute-0 ceph-mon[75031]: pgmap v1292: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 55 KiB/s wr, 3 op/s
Jan 21 14:22:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:22:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 2 op/s
Jan 21 14:22:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:43 compute-0 ceph-mon[75031]: pgmap v1293: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 2 op/s
Jan 21 14:22:43 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.tnwklj(active, since 38m)
Jan 21 14:22:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 3 op/s
Jan 21 14:22:44 compute-0 ceph-mon[75031]: mgrmap e22: compute-0.tnwklj(active, since 38m)
Jan 21 14:22:45 compute-0 ceph-mon[75031]: pgmap v1294: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 3 op/s
Jan 21 14:22:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:22:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 14:22:45 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c251ad3b-9d06-4c01-a543-f9720f7a74a6/20533bf2-9642-4ef1-8d58-d49be22aa6ba'.
Jan 21 14:22:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c251ad3b-9d06-4c01-a543-f9720f7a74a6/.meta.tmp'
Jan 21 14:22:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c251ad3b-9d06-4c01-a543-f9720f7a74a6/.meta.tmp' to config b'/volumes/_nogroup/c251ad3b-9d06-4c01-a543-f9720f7a74a6/.meta'
Jan 21 14:22:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 14:22:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "format": "json"}]: dispatch
Jan 21 14:22:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 14:22:45 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 14:22:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:22:45 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:22:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 3 op/s
Jan 21 14:22:47 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:22:47 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "format": "json"}]: dispatch
Jan 21 14:22:47 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:22:48 compute-0 ceph-mon[75031]: pgmap v1295: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 3 op/s
Jan 21 14:22:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 27 KiB/s wr, 2 op/s
Jan 21 14:22:48 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "format": "json"}]: dispatch
Jan 21 14:22:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:48 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c251ad3b-9d06-4c01-a543-f9720f7a74a6' of type subvolume
Jan 21 14:22:48 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:48.842+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c251ad3b-9d06-4c01-a543-f9720f7a74a6' of type subvolume
Jan 21 14:22:48 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 14:22:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c251ad3b-9d06-4c01-a543-f9720f7a74a6'' moved to trashcan
Jan 21 14:22:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:22:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c251ad3b-9d06-4c01-a543-f9720f7a74a6, vol_name:cephfs) < ""
Jan 21 14:22:49 compute-0 ceph-mon[75031]: pgmap v1296: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 27 KiB/s wr, 2 op/s
Jan 21 14:22:49 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "format": "json"}]: dispatch
Jan 21 14:22:49 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c251ad3b-9d06-4c01-a543-f9720f7a74a6", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:49 compute-0 nova_compute[239261]: 2026-01-21 14:22:49.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:22:49 compute-0 nova_compute[239261]: 2026-01-21 14:22:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:22:49 compute-0 nova_compute[239261]: 2026-01-21 14:22:49.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:22:49 compute-0 nova_compute[239261]: 2026-01-21 14:22:49.907 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:22:50 compute-0 podman[255050]: 2026-01-21 14:22:50.3324485 +0000 UTC m=+0.050512559 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 21 14:22:50 compute-0 podman[255049]: 2026-01-21 14:22:50.362941492 +0000 UTC m=+0.084750931 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 3 op/s
Jan 21 14:22:50 compute-0 nova_compute[239261]: 2026-01-21 14:22:50.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662144933880528 of space, bias 1.0, pg target 0.19986434801641584 quantized to 32 (current 32)
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005335333974999345 of space, bias 4.0, pg target 0.6402400769999215 quantized to 16 (current 16)
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:22:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:22:51 compute-0 nova_compute[239261]: 2026-01-21 14:22:51.719 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:22:51 compute-0 ceph-mon[75031]: pgmap v1297: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 3 op/s
Jan 21 14:22:51 compute-0 nova_compute[239261]: 2026-01-21 14:22:51.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:22:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 14:22:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:52 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 14:22:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1837d3d1-766d-46d2-bd38-bb850ab9ec75'' moved to trashcan
Jan 21 14:22:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:22:52 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1837d3d1-766d-46d2-bd38-bb850ab9ec75, vol_name:cephfs) < ""
Jan 21 14:22:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s wr, 1 op/s
Jan 21 14:22:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:53 compute-0 nova_compute[239261]: 2026-01-21 14:22:53.723 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:22:53 compute-0 nova_compute[239261]: 2026-01-21 14:22:53.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:22:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "format": "json"}]: dispatch
Jan 21 14:22:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1837d3d1-766d-46d2-bd38-bb850ab9ec75", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:53 compute-0 ceph-mon[75031]: pgmap v1298: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s wr, 1 op/s
Jan 21 14:22:53 compute-0 nova_compute[239261]: 2026-01-21 14:22:53.754 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:22:53 compute-0 nova_compute[239261]: 2026-01-21 14:22:53.754 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:22:53 compute-0 nova_compute[239261]: 2026-01-21 14:22:53.754 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:22:53 compute-0 nova_compute[239261]: 2026-01-21 14:22:53.754 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:22:53 compute-0 nova_compute[239261]: 2026-01-21 14:22:53.755 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:22:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:22:54 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243427951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:22:54 compute-0 nova_compute[239261]: 2026-01-21 14:22:54.335 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:22:54 compute-0 nova_compute[239261]: 2026-01-21 14:22:54.504 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:22:54 compute-0 nova_compute[239261]: 2026-01-21 14:22:54.505 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:22:54 compute-0 nova_compute[239261]: 2026-01-21 14:22:54.505 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:22:54 compute-0 nova_compute[239261]: 2026-01-21 14:22:54.505 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:22:54 compute-0 nova_compute[239261]: 2026-01-21 14:22:54.586 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:22:54 compute-0 nova_compute[239261]: 2026-01-21 14:22:54.587 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:22:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 2 op/s
Jan 21 14:22:54 compute-0 nova_compute[239261]: 2026-01-21 14:22:54.606 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:22:54 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3243427951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:22:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:22:55 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446406851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:22:55 compute-0 nova_compute[239261]: 2026-01-21 14:22:55.210 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:22:55 compute-0 nova_compute[239261]: 2026-01-21 14:22:55.215 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:22:55 compute-0 nova_compute[239261]: 2026-01-21 14:22:55.351 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:22:55 compute-0 nova_compute[239261]: 2026-01-21 14:22:55.354 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:22:55 compute-0 nova_compute[239261]: 2026-01-21 14:22:55.355 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:22:55 compute-0 ceph-mon[75031]: pgmap v1299: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 2 op/s
Jan 21 14:22:55 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/446406851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:22:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143_93465ce5-7efa-45bd-b994-86ad6664e631", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143_93465ce5-7efa-45bd-b994-86ad6664e631, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:22:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp'
Jan 21 14:22:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp' to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta'
Jan 21 14:22:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143_93465ce5-7efa-45bd-b994-86ad6664e631, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:22:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:22:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp'
Jan 21 14:22:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta.tmp' to config b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e/.meta'
Jan 21 14:22:55 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:717877ed-ee59-4b6f-a8b8-a5e824a0e143, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:22:56 compute-0 nova_compute[239261]: 2026-01-21 14:22:56.355 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:22:56 compute-0 nova_compute[239261]: 2026-01-21 14:22:56.356 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:22:56 compute-0 nova_compute[239261]: 2026-01-21 14:22:56.356 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:22:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 42 KiB/s wr, 3 op/s
Jan 21 14:22:56 compute-0 nova_compute[239261]: 2026-01-21 14:22:56.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:22:56 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143_93465ce5-7efa-45bd-b994-86ad6664e631", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:56 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "snap_name": "717877ed-ee59-4b6f-a8b8-a5e824a0e143", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:57 compute-0 ceph-mon[75031]: pgmap v1300: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 42 KiB/s wr, 3 op/s
Jan 21 14:22:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:22:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 42 KiB/s wr, 3 op/s
Jan 21 14:22:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "format": "json"}]: dispatch
Jan 21 14:22:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:22:59 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:22:59.467+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cf9fedcb-41b1-4a3d-849f-ba456ffc232e' of type subvolume
Jan 21 14:22:59 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cf9fedcb-41b1-4a3d-849f-ba456ffc232e' of type subvolume
Jan 21 14:22:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "force": true, "format": "json"}]: dispatch
Jan 21 14:22:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:22:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cf9fedcb-41b1-4a3d-849f-ba456ffc232e'' moved to trashcan
Jan 21 14:22:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:22:59 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cf9fedcb-41b1-4a3d-849f-ba456ffc232e, vol_name:cephfs) < ""
Jan 21 14:22:59 compute-0 ceph-mon[75031]: pgmap v1301: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 42 KiB/s wr, 3 op/s
Jan 21 14:23:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 5 op/s
Jan 21 14:23:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "format": "json"}]: dispatch
Jan 21 14:23:00 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cf9fedcb-41b1-4a3d-849f-ba456ffc232e", "force": true, "format": "json"}]: dispatch
Jan 21 14:23:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 21 14:23:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 21 14:23:01 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 21 14:23:01 compute-0 ceph-mon[75031]: pgmap v1302: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 5 op/s
Jan 21 14:23:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 5 op/s
Jan 21 14:23:02 compute-0 ceph-mon[75031]: osdmap e162: 3 total, 3 up, 3 in
Jan 21 14:23:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:03 compute-0 ceph-mon[75031]: pgmap v1304: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 5 op/s
Jan 21 14:23:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 5 op/s
Jan 21 14:23:05 compute-0 ceph-mon[75031]: pgmap v1305: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 5 op/s
Jan 21 14:23:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 14:23:07 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:23:07.267 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:23:07 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:23:07.269 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:23:07 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:23:07.270 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:23:08 compute-0 ceph-mon[75031]: pgmap v1306: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 4 op/s
Jan 21 14:23:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 21 14:23:08 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 21 14:23:08 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 21 14:23:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 36 KiB/s wr, 2 op/s
Jan 21 14:23:09 compute-0 ceph-mon[75031]: osdmap e163: 3 total, 3 up, 3 in
Jan 21 14:23:09 compute-0 ceph-mon[75031]: pgmap v1308: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 36 KiB/s wr, 2 op/s
Jan 21 14:23:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 235 B/s rd, 47 KiB/s wr, 3 op/s
Jan 21 14:23:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:23:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:23:11 compute-0 ceph-mon[75031]: pgmap v1309: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 235 B/s rd, 47 KiB/s wr, 3 op/s
Jan 21 14:23:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:23:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:23:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:23:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:23:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Jan 21 14:23:13 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:13 compute-0 ceph-mon[75031]: pgmap v1310: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Jan 21 14:23:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 12 KiB/s wr, 1 op/s
Jan 21 14:23:15 compute-0 ceph-mon[75031]: pgmap v1311: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 12 KiB/s wr, 1 op/s
Jan 21 14:23:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 14:23:18 compute-0 ceph-mon[75031]: pgmap v1312: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 14:23:18 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 14:23:19 compute-0 ceph-mon[75031]: pgmap v1313: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Jan 21 14:23:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Jan 21 14:23:21 compute-0 podman[255142]: 2026-01-21 14:23:21.353159481 +0000 UTC m=+0.069131442 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 21 14:23:21 compute-0 podman[255141]: 2026-01-21 14:23:21.395540011 +0000 UTC m=+0.114757241 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 21 14:23:21 compute-0 ceph-mon[75031]: pgmap v1314: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Jan 21 14:23:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:23:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4270217180' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:23:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:23:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4270217180' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:23:23 compute-0 ceph-mon[75031]: pgmap v1315: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/4270217180' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:23:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/4270217180' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:23:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:25 compute-0 ceph-mon[75031]: pgmap v1316: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:27 compute-0 ceph-mon[75031]: pgmap v1317: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:28 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:29 compute-0 ceph-mon[75031]: pgmap v1318: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:32 compute-0 ceph-mon[75031]: pgmap v1319: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:33 compute-0 ceph-mon[75031]: pgmap v1320: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:33 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:23:33.914 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:23:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:23:33.915 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:23:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:23:33.915 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:23:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:35 compute-0 ceph-mon[75031]: pgmap v1321: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:36 compute-0 sudo[255184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:23:36 compute-0 sudo[255184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:23:36 compute-0 sudo[255184]: pam_unix(sudo:session): session closed for user root
Jan 21 14:23:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:36 compute-0 sudo[255209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:23:36 compute-0 sudo[255209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:23:37 compute-0 sudo[255209]: pam_unix(sudo:session): session closed for user root
Jan 21 14:23:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:23:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:23:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:23:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:23:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:23:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:23:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:23:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:23:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:23:37 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:23:37 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:23:37 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:23:37 compute-0 sudo[255265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:23:37 compute-0 sudo[255265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:23:37 compute-0 sudo[255265]: pam_unix(sudo:session): session closed for user root
Jan 21 14:23:37 compute-0 sudo[255290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:23:37 compute-0 sudo[255290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:23:37 compute-0 ceph-mon[75031]: pgmap v1322: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:23:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:23:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:23:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:23:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:23:37 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:23:37 compute-0 podman[255327]: 2026-01-21 14:23:37.786145314 +0000 UTC m=+0.047643379 container create 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:23:37 compute-0 systemd[1]: Started libpod-conmon-2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82.scope.
Jan 21 14:23:37 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:23:37 compute-0 podman[255327]: 2026-01-21 14:23:37.765764699 +0000 UTC m=+0.027262754 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:23:37 compute-0 podman[255327]: 2026-01-21 14:23:37.873673563 +0000 UTC m=+0.135171608 container init 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 14:23:37 compute-0 podman[255327]: 2026-01-21 14:23:37.882108488 +0000 UTC m=+0.143606513 container start 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:23:37 compute-0 podman[255327]: 2026-01-21 14:23:37.885843469 +0000 UTC m=+0.147341494 container attach 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:23:37 compute-0 systemd[1]: libpod-2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82.scope: Deactivated successfully.
Jan 21 14:23:37 compute-0 upbeat_montalcini[255343]: 167 167
Jan 21 14:23:37 compute-0 conmon[255343]: conmon 2495f644454fdfafa71a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82.scope/container/memory.events
Jan 21 14:23:37 compute-0 podman[255327]: 2026-01-21 14:23:37.889801145 +0000 UTC m=+0.151299170 container died 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 21 14:23:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8c8a5a4791ac6e879d66693205d476f9c673e783577728d292039ede9bb8815-merged.mount: Deactivated successfully.
Jan 21 14:23:37 compute-0 podman[255327]: 2026-01-21 14:23:37.936514821 +0000 UTC m=+0.198012846 container remove 2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:23:37 compute-0 systemd[1]: libpod-conmon-2495f644454fdfafa71a83455130460121dc362811e00d28ecb1043765f5dc82.scope: Deactivated successfully.
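The upbeat_montalcini container above is one of cephadm's short-lived helpers: created, started, and removed within roughly 150 ms. Its only output, `167 167`, looks like the uid/gid probe cephadm runs to learn the ceph user inside the image (treat that reading as an assumption). A minimal sketch that pairs podman create/remove journal events to time these helpers, assuming the line format shown in this log:

    # Sketch: measure helper-container lifetimes from journal lines like those above.
    import re
    from datetime import datetime

    EVENT = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC .* "
        r"container (?P<kind>create|remove) (?P<cid>[0-9a-f]{64})")

    def lifetimes(lines):
        created = {}
        for line in lines:
            m = EVENT.search(line)
            if not m:
                continue
            ts = datetime.strptime(m["ts"][:26], "%Y-%m-%d %H:%M:%S.%f")  # trim to usec
            if m["kind"] == "create":
                created[m["cid"]] = ts
            elif m["cid"] in created:
                yield m["cid"][:12], (ts - created.pop(m["cid"])).total_seconds()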
Jan 21 14:23:38 compute-0 podman[255366]: 2026-01-21 14:23:38.099688088 +0000 UTC m=+0.041712804 container create 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:23:38 compute-0 systemd[1]: Started libpod-conmon-0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a.scope.
Jan 21 14:23:38 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:38 compute-0 podman[255366]: 2026-01-21 14:23:38.080378439 +0000 UTC m=+0.022403195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
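The kernel notices above are informational: the xfs filesystem backing these overlay bind mounts was formatted without the bigtime feature, so its inode timestamps cap out in 2038. A minimal check, assuming `xfs_info` is installed and that supporting xfsprogs versions report `bigtime=1` (the flag is absent on older releases):

    # Sketch: check whether an xfs mount carries the bigtime feature.
    import subprocess

    def has_bigtime(mountpoint="/var/lib/containers"):
        out = subprocess.run(["xfs_info", mountpoint],
                             capture_output=True, text=True, check=True).stdout
        return "bigtime=1" in out

    print(has_bigtime())  # False would explain the 2038 warnings above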
Jan 21 14:23:38 compute-0 podman[255366]: 2026-01-21 14:23:38.192038463 +0000 UTC m=+0.134063239 container init 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:23:38 compute-0 podman[255366]: 2026-01-21 14:23:38.197922757 +0000 UTC m=+0.139947483 container start 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 21 14:23:38 compute-0 podman[255366]: 2026-01-21 14:23:38.20465435 +0000 UTC m=+0.146679096 container attach 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:23:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:23:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 14:23:38 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/9d3ee2ce-401b-4b9c-9303-f18eb5e8eade'.
Jan 21 14:23:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp'
Jan 21 14:23:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp' to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta'
Jan 21 14:23:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 14:23:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "format": "json"}]: dispatch
Jan 21 14:23:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 14:23:38 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
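In the volumes-module lines above, client.openstack created a 1 GiB, namespace-isolated CephFS subvolume (mode 0755) and then resolved its path. A minimal sketch of the equivalent CLI calls, assuming the standard `ceph fs subvolume` interface and admin credentials; volume and subvolume names are taken from the log:

    # Sketch: reproduce the two mgr "volumes" calls dispatched above.
    import subprocess

    VOL, SUB = "cephfs", "536a2c39-721a-4234-bb20-8865a7392cf1"
    subprocess.run(["ceph", "fs", "subvolume", "create", VOL, SUB,
                    "--size", "1073741824", "--namespace-isolated",
                    "--mode", "0755"], check=True)
    path = subprocess.run(["ceph", "fs", "subvolume", "getpath", VOL, SUB],
                          capture_output=True, text=True, check=True).stdout.strip()
    print(path)  # e.g. /volumes/_nogroup/<sub>/<uuid>, as in the earmark line above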
Jan 21 14:23:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:23:38 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:23:38 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:38 compute-0 thirsty_bassi[255382]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:23:38 compute-0 thirsty_bassi[255382]: --> All data devices are unavailable
Jan 21 14:23:38 compute-0 systemd[1]: libpod-0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a.scope: Deactivated successfully.
Jan 21 14:23:38 compute-0 podman[255366]: 2026-01-21 14:23:38.689854908 +0000 UTC m=+0.631879634 container died 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0feb4e4e321cda1c8fe307a777a6ee65991ca880a36bce1db126f05ead4bcf12-merged.mount: Deactivated successfully.
Jan 21 14:23:38 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:23:38 compute-0 podman[255366]: 2026-01-21 14:23:38.738497871 +0000 UTC m=+0.680522597 container remove 0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:23:38 compute-0 systemd[1]: libpod-conmon-0a2510c7a566950e2a09a5df663e33a7cfbd16c6d6c09217139645885aefeb0a.scope: Deactivated successfully.
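thirsty_bassi's output above (`passed data devices: 0 physical, 3 LVM` followed by `All data devices are unavailable`) means ceph-volume skipped the batch: all three LVs already carry prepared OSDs, which the `lvm list` output further down confirms (osd ids 0-2). A minimal sketch for checking availability the same way, assuming `ceph-volume inventory` and its JSON fields `path`, `available`, and `rejected_reasons`:

    # Sketch: list why ceph-volume considers devices (un)available.
    import json, subprocess

    inv = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    for dev in inv:
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))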
Jan 21 14:23:38 compute-0 sudo[255290]: pam_unix(sudo:session): session closed for user root
Jan 21 14:23:38 compute-0 sudo[255414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:23:38 compute-0 sudo[255414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:23:38 compute-0 sudo[255414]: pam_unix(sudo:session): session closed for user root
Jan 21 14:23:38 compute-0 sudo[255439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:23:38 compute-0 sudo[255439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:23:39 compute-0 podman[255475]: 2026-01-21 14:23:39.296438847 +0000 UTC m=+0.056685689 container create 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:23:39 compute-0 systemd[1]: Started libpod-conmon-5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5.scope.
Jan 21 14:23:39 compute-0 podman[255475]: 2026-01-21 14:23:39.267680758 +0000 UTC m=+0.027927660 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:23:39 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:23:39 compute-0 podman[255475]: 2026-01-21 14:23:39.394095402 +0000 UTC m=+0.154342244 container init 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:23:39 compute-0 podman[255475]: 2026-01-21 14:23:39.402789183 +0000 UTC m=+0.163035995 container start 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:23:39 compute-0 podman[255475]: 2026-01-21 14:23:39.406529994 +0000 UTC m=+0.166776836 container attach 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 21 14:23:39 compute-0 systemd[1]: libpod-5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5.scope: Deactivated successfully.
Jan 21 14:23:39 compute-0 flamboyant_lumiere[255491]: 167 167
Jan 21 14:23:39 compute-0 conmon[255491]: conmon 5917245ff794758140f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5.scope/container/memory.events
Jan 21 14:23:39 compute-0 podman[255475]: 2026-01-21 14:23:39.408979163 +0000 UTC m=+0.169225995 container died 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 14:23:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-10d51dc34ca73f0b5617bef2a5b01cf17fa3c8a186db35a052e340917e406537-merged.mount: Deactivated successfully.
Jan 21 14:23:39 compute-0 podman[255475]: 2026-01-21 14:23:39.449597231 +0000 UTC m=+0.209844043 container remove 5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lumiere, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 21 14:23:39 compute-0 systemd[1]: libpod-conmon-5917245ff794758140f3befc75f6c1b6d696fc8a5a3f2bbbac359074941899c5.scope: Deactivated successfully.
Jan 21 14:23:39 compute-0 podman[255514]: 2026-01-21 14:23:39.634960868 +0000 UTC m=+0.052904857 container create 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:23:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:23:39
Jan 21 14:23:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:23:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:23:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'volumes', 'backups']
Jan 21 14:23:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
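The balancer lines above show an upmap optimization pass over the eleven pools that prepared 0/10 changes, i.e. the cluster is already balanced within the 5% misplaced threshold (max misplaced 0.050000). A minimal status check, assuming `ceph balancer status` honors `--format json` on this release:

    # Sketch: query the balancer the plan above came from.
    import json, subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(status.get("active"), status.get("mode"))  # expect True, "upmap"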
Jan 21 14:23:39 compute-0 systemd[1]: Started libpod-conmon-5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c.scope.
Jan 21 14:23:39 compute-0 podman[255514]: 2026-01-21 14:23:39.614250974 +0000 UTC m=+0.032195003 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:23:39 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6206f865ddd1e7835ae4a45ed231284db8455e9bc5d07d997581738374c168/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6206f865ddd1e7835ae4a45ed231284db8455e9bc5d07d997581738374c168/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6206f865ddd1e7835ae4a45ed231284db8455e9bc5d07d997581738374c168/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6206f865ddd1e7835ae4a45ed231284db8455e9bc5d07d997581738374c168/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:39 compute-0 podman[255514]: 2026-01-21 14:23:39.735444432 +0000 UTC m=+0.153388441 container init 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 14:23:39 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:23:39 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "format": "json"}]: dispatch
Jan 21 14:23:39 compute-0 ceph-mon[75031]: pgmap v1323: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:23:39 compute-0 podman[255514]: 2026-01-21 14:23:39.742127523 +0000 UTC m=+0.160071512 container start 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 14:23:39 compute-0 podman[255514]: 2026-01-21 14:23:39.745438525 +0000 UTC m=+0.163382524 container attach 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]: {
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:     "0": [
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:         {
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "devices": [
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "/dev/loop3"
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             ],
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_name": "ceph_lv0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_size": "21470642176",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "name": "ceph_lv0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "tags": {
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.cluster_name": "ceph",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.crush_device_class": "",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.encrypted": "0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.objectstore": "bluestore",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.osd_id": "0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.type": "block",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.vdo": "0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.with_tpm": "0"
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             },
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "type": "block",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "vg_name": "ceph_vg0"
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:         }
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:     ],
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:     "1": [
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:         {
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "devices": [
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "/dev/loop4"
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             ],
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_name": "ceph_lv1",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_size": "21470642176",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "name": "ceph_lv1",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "tags": {
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.cluster_name": "ceph",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.crush_device_class": "",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.encrypted": "0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.objectstore": "bluestore",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.osd_id": "1",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.type": "block",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.vdo": "0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.with_tpm": "0"
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             },
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "type": "block",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "vg_name": "ceph_vg1"
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:         }
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:     ],
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:     "2": [
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:         {
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "devices": [
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "/dev/loop5"
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             ],
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_name": "ceph_lv2",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_size": "21470642176",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "name": "ceph_lv2",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "tags": {
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.cluster_name": "ceph",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.crush_device_class": "",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.encrypted": "0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.objectstore": "bluestore",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.osd_id": "2",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.type": "block",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.vdo": "0",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:                 "ceph.with_tpm": "0"
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             },
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "type": "block",
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:             "vg_name": "ceph_vg2"
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:         }
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]:     ]
Jan 21 14:23:39 compute-0 sweet_blackwell[255530]: }
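The sweet_blackwell output above is the `ceph-volume lvm list --format json` result: a map from OSD id to the logical volume backing it, with the ceph.* LV tags inlined. A minimal parser for exactly this shape, using only the fields visible above:

    # Sketch: reduce the `lvm list` JSON above to an osd_id -> device summary.
    import json

    def summarize(lvm_list_json: str):
        osds = json.loads(lvm_list_json)
        for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                yield {
                    "osd_id": osd_id,
                    "lv_path": lv["lv_path"],            # e.g. /dev/ceph_vg0/ceph_lv0
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                    "backing": lv["devices"],            # e.g. ["/dev/loop3"]
                    "objectstore": lv["tags"]["ceph.objectstore"],
                }

    # for row in summarize(raw_json): print(row)  # yields osd ids 0, 1, 2 here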
Jan 21 14:23:40 compute-0 systemd[1]: libpod-5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c.scope: Deactivated successfully.
Jan 21 14:23:40 compute-0 podman[255514]: 2026-01-21 14:23:40.021524257 +0000 UTC m=+0.439468246 container died 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 21 14:23:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s wr, 0 op/s
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b6206f865ddd1e7835ae4a45ed231284db8455e9bc5d07d997581738374c168-merged.mount: Deactivated successfully.
Jan 21 14:23:41 compute-0 podman[255514]: 2026-01-21 14:23:41.070932493 +0000 UTC m=+1.488876482 container remove 5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 21 14:23:41 compute-0 systemd[1]: libpod-conmon-5ca650404ae01b009a96a40f08d851c067b5a246fe8a02951ac0884d06eeb90c.scope: Deactivated successfully.
Jan 21 14:23:41 compute-0 sudo[255439]: pam_unix(sudo:session): session closed for user root
Jan 21 14:23:41 compute-0 sudo[255553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:23:41 compute-0 sudo[255553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:23:41 compute-0 sudo[255553]: pam_unix(sudo:session): session closed for user root
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:23:41 compute-0 sudo[255578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:23:41 compute-0 sudo[255578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "snap_name": "a974555c-2f99-4804-bf49-5a8570c58762", "format": "json"}]: dispatch
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a974555c-2f99-4804-bf49-5a8570c58762, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 14:23:41 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a974555c-2f99-4804-bf49-5a8570c58762, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
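The same client now snapshots the subvolume it created earlier. A minimal sketch of the equivalent call, assuming the standard `ceph fs subvolume snapshot` interface; all names are taken from the log lines above:

    # Sketch: reproduce the snapshot-create dispatched above.
    import subprocess

    subprocess.run(["ceph", "fs", "subvolume", "snapshot", "create",
                    "cephfs", "536a2c39-721a-4234-bb20-8865a7392cf1",
                    "a974555c-2f99-4804-bf49-5a8570c58762"], check=True)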
Jan 21 14:23:41 compute-0 podman[255616]: 2026-01-21 14:23:41.59805678 +0000 UTC m=+0.027267704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:23:41 compute-0 podman[255616]: 2026-01-21 14:23:41.862691094 +0000 UTC m=+0.291901958 container create cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 21 14:23:41 compute-0 systemd[1]: Started libpod-conmon-cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349.scope.
Jan 21 14:23:41 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:23:41 compute-0 podman[255616]: 2026-01-21 14:23:41.969678866 +0000 UTC m=+0.398889730 container init cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 21 14:23:41 compute-0 podman[255616]: 2026-01-21 14:23:41.976931893 +0000 UTC m=+0.406142757 container start cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:23:41 compute-0 dreamy_babbage[255632]: 167 167
Jan 21 14:23:41 compute-0 systemd[1]: libpod-cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349.scope: Deactivated successfully.
Jan 21 14:23:42 compute-0 podman[255616]: 2026-01-21 14:23:42.008846408 +0000 UTC m=+0.438057302 container attach cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 21 14:23:42 compute-0 podman[255616]: 2026-01-21 14:23:42.009769651 +0000 UTC m=+0.438980515 container died cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:23:42 compute-0 ceph-mon[75031]: pgmap v1324: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s wr, 0 op/s
Jan 21 14:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b10fa3b4837ca5f4eb89c6b954c39a0b8c047d927e5ab1a8f67b14c0d3b918dc-merged.mount: Deactivated successfully.
Jan 21 14:23:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:23:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:23:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:23:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:23:42 compute-0 podman[255616]: 2026-01-21 14:23:42.136704068 +0000 UTC m=+0.565914932 container remove cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 21 14:23:42 compute-0 systemd[1]: libpod-conmon-cd44bc8b323d5cb1a63778657c8f1429de44df4c9875fc6cdd261905be835349.scope: Deactivated successfully.
Jan 21 14:23:42 compute-0 podman[255656]: 2026-01-21 14:23:42.312698336 +0000 UTC m=+0.046399699 container create 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 14:23:42 compute-0 systemd[1]: Started libpod-conmon-6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f.scope.
Jan 21 14:23:42 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43ea0c3778ceecba2f130cefef7a15dcf9f458dd48ad40650188f354ad2274c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43ea0c3778ceecba2f130cefef7a15dcf9f458dd48ad40650188f354ad2274c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43ea0c3778ceecba2f130cefef7a15dcf9f458dd48ad40650188f354ad2274c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43ea0c3778ceecba2f130cefef7a15dcf9f458dd48ad40650188f354ad2274c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:23:42 compute-0 podman[255656]: 2026-01-21 14:23:42.292816293 +0000 UTC m=+0.026517686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:23:42 compute-0 podman[255656]: 2026-01-21 14:23:42.39713182 +0000 UTC m=+0.130833203 container init 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:23:42 compute-0 podman[255656]: 2026-01-21 14:23:42.403956296 +0000 UTC m=+0.137657659 container start 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 14:23:42 compute-0 podman[255656]: 2026-01-21 14:23:42.411068309 +0000 UTC m=+0.144769672 container attach 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:23:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s wr, 0 op/s
Jan 21 14:23:43 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "snap_name": "a974555c-2f99-4804-bf49-5a8570c58762", "format": "json"}]: dispatch
Jan 21 14:23:43 compute-0 lvm[255752]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:23:43 compute-0 lvm[255751]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:23:43 compute-0 lvm[255752]: VG ceph_vg1 finished
Jan 21 14:23:43 compute-0 lvm[255751]: VG ceph_vg0 finished
Jan 21 14:23:43 compute-0 lvm[255754]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:23:43 compute-0 lvm[255754]: VG ceph_vg2 finished
Jan 21 14:23:43 compute-0 hardcore_gates[255673]: {}
Jan 21 14:23:43 compute-0 systemd[1]: libpod-6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f.scope: Deactivated successfully.
Jan 21 14:23:43 compute-0 systemd[1]: libpod-6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f.scope: Consumed 1.314s CPU time.
Jan 21 14:23:43 compute-0 podman[255656]: 2026-01-21 14:23:43.194545308 +0000 UTC m=+0.928246691 container died 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c43ea0c3778ceecba2f130cefef7a15dcf9f458dd48ad40650188f354ad2274c-merged.mount: Deactivated successfully.
Jan 21 14:23:43 compute-0 podman[255656]: 2026-01-21 14:23:43.23205085 +0000 UTC m=+0.965752213 container remove 6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:23:43 compute-0 systemd[1]: libpod-conmon-6c4de1ad9559ef9161a1843605ce32c9999347f9b7655324a68112c5fef4e91f.scope: Deactivated successfully.
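Note: the podman lines above trace the complete lifecycle of one short-lived cephadm helper container (create, init, start, attach, died, remove, bracketed by the matching libpod-conmon scope activation and teardown; its only stdout was the "{}" logged for hardcore_gates). A minimal sketch for watching that event stream live, assuming only that podman is installed and the invoking user owns the containers:

    import json
    import subprocess

    # Stream podman events; each JSON object carries the same Status values
    # seen above (create, init, start, attach, died, remove).
    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])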
Jan 21 14:23:43 compute-0 sudo[255578]: pam_unix(sudo:session): session closed for user root
Jan 21 14:23:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:23:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:23:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:23:43 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:23:43 compute-0 sudo[255767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:23:43 compute-0 sudo[255767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:23:43 compute-0 sudo[255767]: pam_unix(sudo:session): session closed for user root
Jan 21 14:23:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:44 compute-0 ceph-mon[75031]: pgmap v1325: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s wr, 0 op/s
Jan 21 14:23:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:23:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:23:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 1 op/s
Jan 21 14:23:46 compute-0 ceph-mon[75031]: pgmap v1326: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 1 op/s
Jan 21 14:23:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 1 op/s
Jan 21 14:23:47 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "snap_name": "a974555c-2f99-4804-bf49-5a8570c58762_89f00eb7-0d7f-4bfa-aebe-ef725e504018", "force": true, "format": "json"}]: dispatch
Jan 21 14:23:47 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a974555c-2f99-4804-bf49-5a8570c58762_89f00eb7-0d7f-4bfa-aebe-ef725e504018, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 14:23:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp'
Jan 21 14:23:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp' to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta'
Jan 21 14:23:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a974555c-2f99-4804-bf49-5a8570c58762_89f00eb7-0d7f-4bfa-aebe-ef725e504018, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 14:23:48 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "snap_name": "a974555c-2f99-4804-bf49-5a8570c58762", "force": true, "format": "json"}]: dispatch
Jan 21 14:23:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a974555c-2f99-4804-bf49-5a8570c58762, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 14:23:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp'
Jan 21 14:23:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta.tmp' to config b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1/.meta'
Jan 21 14:23:48 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a974555c-2f99-4804-bf49-5a8570c58762, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
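Note: the metadata_manager pattern above (write the full config to .meta.tmp, then rename it over .meta) is the standard crash-safe update idiom: readers observe either the old file or the new one, never a torn write. A local-filesystem sketch of the same idiom (illustrative only, not the Ceph mgr code, which does this against CephFS):

    import os

    def atomic_write(path, data):
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)            # full new contents into the temp file
            f.flush()
            os.fsync(f.fileno())     # persist before publishing
        os.replace(tmp, path)        # atomic rename: old or new, never half

    atomic_write("/tmp/.meta", b"155 bytes of subvolume config would go here\n")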
Jan 21 14:23:48 compute-0 ceph-mon[75031]: pgmap v1327: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 1 op/s
Jan 21 14:23:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 1 op/s
Jan 21 14:23:48 compute-0 nova_compute[239261]: 2026-01-21 14:23:48.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:49 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "snap_name": "a974555c-2f99-4804-bf49-5a8570c58762_89f00eb7-0d7f-4bfa-aebe-ef725e504018", "force": true, "format": "json"}]: dispatch
Jan 21 14:23:49 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "snap_name": "a974555c-2f99-4804-bf49-5a8570c58762", "force": true, "format": "json"}]: dispatch
Jan 21 14:23:50 compute-0 ceph-mon[75031]: pgmap v1328: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s wr, 1 op/s
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 2 op/s
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662150522907583 of space, bias 1.0, pg target 0.1998645156872275 quantized to 32 (current 32)
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005554138018903753 of space, bias 4.0, pg target 0.6664965622684504 quantized to 16 (current 16)
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:23:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
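Note: the autoscaler's "pg target" figures above are reproducible as usage_ratio x bias x PG budget, where the budget is mon_target_pg_per_osd x OSD count. Assuming the default mon_target_pg_per_osd = 100 and the 3 OSDs this cluster reports in the osdmap lines below, the budget is 300; the target is then quantized to a power of two and floored at the pool's minimum, which is why nearly every pool lands on 32 (16 for the bias-4 metadata pool, 1 for .mgr). Checking three of the logged values:

    # Usage ratios and biases copied from the pg_autoscaler lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0006662150522907583, 1.0),
        "cephfs.cephfs.meta": (0.0005554138018903753, 4.0),
    }
    PG_BUDGET = 100 * 3   # mon_target_pg_per_osd (assumed default) * OSD count

    for name, (usage_ratio, bias) in pools.items():
        print(name, "pg target", usage_ratio * bias * PG_BUDGET)
    # .mgr 0.0021557..., images 0.1998645..., cephfs.cephfs.meta 0.6664965...
    # matching the "pg target" figures the autoscaler logged.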
Jan 21 14:23:51 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "format": "json"}]: dispatch
Jan 21 14:23:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:536a2c39-721a-4234-bb20-8865a7392cf1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:23:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:536a2c39-721a-4234-bb20-8865a7392cf1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:23:51 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '536a2c39-721a-4234-bb20-8865a7392cf1' of type subvolume
Jan 21 14:23:51 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:23:51.892+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '536a2c39-721a-4234-bb20-8865a7392cf1' of type subvolume
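Note: the "(95) Operation not supported" reply above is expected rather than a failure: "fs clone status" only applies to subvolumes of type clone, and this one is a plain subvolume, so the client's probe gets EOPNOTSUPP and proceeds straight to "fs subvolume rm". Callers typically translate errno 95 into "not a clone", roughly like this (run_clone_status is a hypothetical callable standing in for the actual command dispatch):

    import errno

    def is_clone(run_clone_status):
        # run_clone_status: any callable that issues "fs clone status" and
        # raises OSError on a nonzero Ceph return code (hypothetical helper).
        try:
            run_clone_status()
            return True
        except OSError as exc:
            if exc.errno == errno.EOPNOTSUPP:   # 95: plain subvolume
                return False
            raise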
Jan 21 14:23:51 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "force": true, "format": "json"}]: dispatch
Jan 21 14:23:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 14:23:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/536a2c39-721a-4234-bb20-8865a7392cf1'' moved to trashcan
Jan 21 14:23:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:23:51 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:536a2c39-721a-4234-bb20-8865a7392cf1, vol_name:cephfs) < ""
Jan 21 14:23:52 compute-0 nova_compute[239261]: 2026-01-21 14:23:52.041 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:52 compute-0 nova_compute[239261]: 2026-01-21 14:23:52.042 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:52 compute-0 nova_compute[239261]: 2026-01-21 14:23:52.042 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:23:52 compute-0 nova_compute[239261]: 2026-01-21 14:23:52.042 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:23:52 compute-0 ceph-mon[75031]: pgmap v1329: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 2 op/s
Jan 21 14:23:52 compute-0 podman[255793]: 2026-01-21 14:23:52.329169524 +0000 UTC m=+0.053268866 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 21 14:23:52 compute-0 nova_compute[239261]: 2026-01-21 14:23:52.378 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:23:52 compute-0 nova_compute[239261]: 2026-01-21 14:23:52.378 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:52 compute-0 podman[255792]: 2026-01-21 14:23:52.384641883 +0000 UTC m=+0.108765706 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
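Note: the two health_status=healthy events above come from podman periodically running the configured check, visible in each container's config_data as 'healthcheck': {'test': '/openstack/healthcheck', ...}. The same check can be triggered by hand; a sketch, assuming podman is installed and the container names from the log:

    import subprocess
    # Exit code 0 corresponds to the health_status=healthy events logged above.
    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=True)
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"], check=True)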
Jan 21 14:23:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s wr, 2 op/s
Jan 21 14:23:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "format": "json"}]: dispatch
Jan 21 14:23:53 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "536a2c39-721a-4234-bb20-8865a7392cf1", "force": true, "format": "json"}]: dispatch
Jan 21 14:23:53 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:53 compute-0 nova_compute[239261]: 2026-01-21 14:23:53.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:53 compute-0 nova_compute[239261]: 2026-01-21 14:23:53.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:54 compute-0 ceph-mon[75031]: pgmap v1330: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s wr, 2 op/s
Jan 21 14:23:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s wr, 2 op/s
Jan 21 14:23:54 compute-0 nova_compute[239261]: 2026-01-21 14:23:54.720 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:54 compute-0 nova_compute[239261]: 2026-01-21 14:23:54.779 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:54 compute-0 nova_compute[239261]: 2026-01-21 14:23:54.779 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:23:54 compute-0 nova_compute[239261]: 2026-01-21 14:23:54.779 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:54 compute-0 nova_compute[239261]: 2026-01-21 14:23:54.845 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:23:54 compute-0 nova_compute[239261]: 2026-01-21 14:23:54.846 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:23:54 compute-0 nova_compute[239261]: 2026-01-21 14:23:54.846 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:23:54 compute-0 nova_compute[239261]: 2026-01-21 14:23:54.846 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:23:54 compute-0 nova_compute[239261]: 2026-01-21 14:23:54.846 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:23:55 compute-0 ceph-mon[75031]: pgmap v1331: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s wr, 2 op/s
Jan 21 14:23:55 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:23:55 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1618769977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:23:55 compute-0 nova_compute[239261]: 2026-01-21 14:23:55.354 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
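Note: the 0.5 s subprocess above is how the nova libvirt driver sizes its RBD backend during the resource audit: it shells out to ceph df and reads the JSON totals (the matching mon-side dispatch is the audit line just before it). A stand-alone equivalent, assuming the same conf/keyring paths as the log and the stats field names of current Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        text=True)
    stats = json.loads(out)["stats"]
    print("bytes total/avail:", stats["total_bytes"], stats["total_avail_bytes"])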
Jan 21 14:23:55 compute-0 nova_compute[239261]: 2026-01-21 14:23:55.499 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:23:55 compute-0 nova_compute[239261]: 2026-01-21 14:23:55.500 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5014MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:23:55 compute-0 nova_compute[239261]: 2026-01-21 14:23:55.501 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:23:55 compute-0 nova_compute[239261]: 2026-01-21 14:23:55.501 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:23:55 compute-0 nova_compute[239261]: 2026-01-21 14:23:55.984 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:23:55 compute-0 nova_compute[239261]: 2026-01-21 14:23:55.984 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.113 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing inventories for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.192 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating ProviderTree inventory for provider 172aa181-ce4f-4953-808e-b8a26e60249f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.193 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Updating inventory in ProviderTree for provider 172aa181-ce4f-4953-808e-b8a26e60249f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
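Note: the inventory dict above is what the placement service actually schedules against; effective capacity per resource class is (total - reserved) x allocation_ratio. Worked out for the values logged:

    inventory = {   # copied from the ProviderTree update above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1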
Jan 21 14:23:56 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1618769977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.211 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing aggregate associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.237 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Refreshing trait associations for resource provider 172aa181-ce4f-4953-808e-b8a26e60249f, traits: COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_TRUSTED_CERTS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.256 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:23:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 45 KiB/s wr, 3 op/s
Jan 21 14:23:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:23:56 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071421282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.768 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.773 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.854 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.856 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.856 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.857 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:56 compute-0 nova_compute[239261]: 2026-01-21 14:23:56.857 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 21 14:23:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 21 14:23:57 compute-0 ceph-mon[75031]: pgmap v1332: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 45 KiB/s wr, 3 op/s
Jan 21 14:23:57 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2071421282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:23:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 21 14:23:57 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 21 14:23:57 compute-0 nova_compute[239261]: 2026-01-21 14:23:57.819 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 54 KiB/s wr, 4 op/s
Jan 21 14:23:58 compute-0 ceph-mon[75031]: osdmap e164: 3 total, 3 up, 3 in
Jan 21 14:23:58 compute-0 nova_compute[239261]: 2026-01-21 14:23:58.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:23:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:23:59 compute-0 ceph-mon[75031]: pgmap v1334: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 54 KiB/s wr, 4 op/s
Jan 21 14:24:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 33 KiB/s wr, 3 op/s
Jan 21 14:24:01 compute-0 ceph-mon[75031]: pgmap v1335: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 33 KiB/s wr, 3 op/s
Jan 21 14:24:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 33 KiB/s wr, 3 op/s
Jan 21 14:24:03 compute-0 ceph-mon[75031]: pgmap v1336: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 33 KiB/s wr, 3 op/s
Jan 21 14:24:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 21 14:24:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 21 14:24:04 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 21 14:24:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s wr, 1 op/s
Jan 21 14:24:05 compute-0 ceph-mon[75031]: osdmap e165: 3 total, 3 up, 3 in
Jan 21 14:24:05 compute-0 ceph-mon[75031]: pgmap v1338: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s wr, 1 op/s
Jan 21 14:24:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 1 op/s
Jan 21 14:24:07 compute-0 ceph-mon[75031]: pgmap v1339: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 1 op/s
Jan 21 14:24:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 1 op/s
Jan 21 14:24:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:09 compute-0 ceph-mon[75031]: pgmap v1340: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 1 op/s
Jan 21 14:24:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s wr, 0 op/s
Jan 21 14:24:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:24:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:24:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:24:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:24:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:24:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:24:12 compute-0 ceph-mon[75031]: pgmap v1341: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s wr, 0 op/s
Jan 21 14:24:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s wr, 0 op/s
Jan 21 14:24:12 compute-0 nova_compute[239261]: 2026-01-21 14:24:12.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:24:12 compute-0 nova_compute[239261]: 2026-01-21 14:24:12.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 21 14:24:12 compute-0 nova_compute[239261]: 2026-01-21 14:24:12.750 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 21 14:24:13 compute-0 ceph-mon[75031]: pgmap v1342: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s wr, 0 op/s
Jan 21 14:24:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s wr, 0 op/s
Jan 21 14:24:14 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:24:14.787 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:24:14 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:24:14.788 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:24:15 compute-0 ceph-mon[75031]: pgmap v1343: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s wr, 0 op/s
Jan 21 14:24:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 14:24:17 compute-0 ceph-mon[75031]: pgmap v1344: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 14:24:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 14:24:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:19 compute-0 ceph-mon[75031]: pgmap v1345: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 14:24:20 compute-0 nova_compute[239261]: 2026-01-21 14:24:20.402 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:24:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 14:24:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:24:21 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6508 writes, 29K keys, 6508 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6508 writes, 6508 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1724 writes, 8397 keys, 1724 commit groups, 1.0 writes per commit group, ingest: 11.03 MB, 0.02 MB/s
                                           Interval WAL: 1724 writes, 1724 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     32.2      1.06              0.11        16    0.067       0      0       0.0       0.0
                                             L6      1/0    8.68 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5     47.4     39.1      3.05              0.36        15    0.203     73K   8415       0.0       0.0
                                            Sum      1/0    8.68 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     35.2     37.3      4.11              0.47        31    0.133     73K   8415       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0     52.6     53.9      0.89              0.15         8    0.111     24K   2633       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     47.4     39.1      3.05              0.36        15    0.203     73K   8415       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     32.2      1.06              0.11        15    0.071       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.033, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.15 GB write, 0.06 MB/s write, 0.14 GB read, 0.06 MB/s read, 4.1 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562240bf58d0#2 capacity: 304.00 MB usage: 16.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000183 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1026,15.64 MB,5.14359%) FilterBlock(32,213.55 KB,0.0685993%) IndexBlock(32,395.70 KB,0.127115%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 21 14:24:21 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:24:21.790 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:24:21 compute-0 ceph-mon[75031]: pgmap v1346: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s wr, 0 op/s
Jan 21 14:24:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:24:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:24:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/471390696' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:24:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:24:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/471390696' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:24:23 compute-0 podman[255882]: 2026-01-21 14:24:23.369343608 +0000 UTC m=+0.087887288 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 21 14:24:23 compute-0 podman[255881]: 2026-01-21 14:24:23.374244947 +0000 UTC m=+0.103525818 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 21 14:24:23 compute-0 ceph-mon[75031]: pgmap v1347: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:24:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/471390696' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:24:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/471390696' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:24:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:24:25 compute-0 ceph-mon[75031]: pgmap v1348: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:24:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:24:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:24:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:27 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/f62c360a-91ba-4a12-8a48-a3a783029d44'.
Jan 21 14:24:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp'
Jan 21 14:24:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp' to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta'
Jan 21 14:24:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:27 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "format": "json"}]: dispatch
Jan 21 14:24:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:27 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:27 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:24:27 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:24:27 compute-0 ceph-mon[75031]: pgmap v1349: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:24:27 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:24:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:24:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:24:29 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "format": "json"}]: dispatch
Jan 21 14:24:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:29 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "snap_name": "129f980f-9630-48b1-bcde-e45a9ed0079b", "format": "json"}]: dispatch
Jan 21 14:24:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:29 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:30 compute-0 ceph-mon[75031]: pgmap v1350: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:24:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Jan 21 14:24:31 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "snap_name": "129f980f-9630-48b1-bcde-e45a9ed0079b", "format": "json"}]: dispatch
Jan 21 14:24:32 compute-0 ceph-mon[75031]: pgmap v1351: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Jan 21 14:24:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Jan 21 14:24:33 compute-0 ceph-mon[75031]: pgmap v1352: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Jan 21 14:24:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "snap_name": "129f980f-9630-48b1-bcde-e45a9ed0079b_ca15cc81-265c-4731-8934-f7ef13bd3c7e", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b_ca15cc81-265c-4731-8934-f7ef13bd3c7e, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp'
Jan 21 14:24:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp' to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta'
Jan 21 14:24:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b_ca15cc81-265c-4731-8934-f7ef13bd3c7e, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:33 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "snap_name": "129f980f-9630-48b1-bcde-e45a9ed0079b", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp'
Jan 21 14:24:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta.tmp' to config b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2/.meta'
Jan 21 14:24:33 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:129f980f-9630-48b1-bcde-e45a9ed0079b, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:24:33.915 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:24:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:24:33.916 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:24:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:24:33.916 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:24:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "snap_name": "129f980f-9630-48b1-bcde-e45a9ed0079b_ca15cc81-265c-4731-8934-f7ef13bd3c7e", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:34 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "snap_name": "129f980f-9630-48b1-bcde-e45a9ed0079b", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 1 op/s
Jan 21 14:24:35 compute-0 ceph-mon[75031]: pgmap v1353: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 1 op/s
Jan 21 14:24:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 2 op/s
Jan 21 14:24:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a5f701d9-3332-493b-805e-f694262123e2", "format": "json"}]: dispatch
Jan 21 14:24:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a5f701d9-3332-493b-805e-f694262123e2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:24:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a5f701d9-3332-493b-805e-f694262123e2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:24:37 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:24:37.030+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a5f701d9-3332-493b-805e-f694262123e2' of type subvolume
Jan 21 14:24:37 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a5f701d9-3332-493b-805e-f694262123e2' of type subvolume
Jan 21 14:24:37 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a5f701d9-3332-493b-805e-f694262123e2'' moved to trashcan
Jan 21 14:24:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:24:37 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a5f701d9-3332-493b-805e-f694262123e2, vol_name:cephfs) < ""
Jan 21 14:24:37 compute-0 ceph-mon[75031]: pgmap v1354: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 2 op/s
Jan 21 14:24:37 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a5f701d9-3332-493b-805e-f694262123e2", "format": "json"}]: dispatch
Jan 21 14:24:37 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a5f701d9-3332-493b-805e-f694262123e2", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 2 op/s
Jan 21 14:24:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:24:39
Jan 21 14:24:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:24:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:24:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'volumes']
Jan 21 14:24:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:24:39 compute-0 ceph-mon[75031]: pgmap v1355: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s wr, 2 op/s
Jan 21 14:24:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 65 KiB/s wr, 4 op/s
Jan 21 14:24:40 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:24:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:40 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/5b0d28ac-7ccd-4441-b34b-f4cb942173d6'.
Jan 21 14:24:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp'
Jan 21 14:24:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp' to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta'
Jan 21 14:24:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:40 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "format": "json"}]: dispatch
Jan 21 14:24:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:40 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:24:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:24:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:24:41 compute-0 ceph-mon[75031]: pgmap v1356: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 65 KiB/s wr, 4 op/s
Jan 21 14:24:41 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:24:41 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "format": "json"}]: dispatch
Jan 21 14:24:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:24:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:24:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:24:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:24:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:24:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 56 KiB/s wr, 4 op/s
Jan 21 14:24:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 21 14:24:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 21 14:24:42 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 21 14:24:43 compute-0 sudo[255926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:24:43 compute-0 sudo[255926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:24:43 compute-0 sudo[255926]: pam_unix(sudo:session): session closed for user root
Jan 21 14:24:43 compute-0 sudo[255951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:24:43 compute-0 sudo[255951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:24:43 compute-0 ceph-mon[75031]: pgmap v1357: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 56 KiB/s wr, 4 op/s
Jan 21 14:24:43 compute-0 ceph-mon[75031]: osdmap e166: 3 total, 3 up, 3 in
Jan 21 14:24:43 compute-0 sudo[255951]: pam_unix(sudo:session): session closed for user root
Jan 21 14:24:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:24:44 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:24:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:24:44 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:24:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:24:44 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "snap_name": "d9839651-f469-4415-89ae-cc62bff4e10f", "format": "json"}]: dispatch
Jan 21 14:24:44 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:44 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:24:44 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:24:44 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:24:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:24:44 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:24:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:24:44 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:24:44 compute-0 sudo[256007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:24:44 compute-0 sudo[256007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:24:44 compute-0 sudo[256007]: pam_unix(sudo:session): session closed for user root
Jan 21 14:24:44 compute-0 sudo[256032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:24:44 compute-0 sudo[256032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:24:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Jan 21 14:24:44 compute-0 podman[256069]: 2026-01-21 14:24:44.70871789 +0000 UTC m=+0.024615849 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:24:44 compute-0 podman[256069]: 2026-01-21 14:24:44.843833635 +0000 UTC m=+0.159731584 container create 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:24:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:24:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:24:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:24:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:24:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:24:44 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:24:44 compute-0 systemd[1]: Started libpod-conmon-7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2.scope.
Jan 21 14:24:44 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:24:45 compute-0 podman[256069]: 2026-01-21 14:24:45.176126705 +0000 UTC m=+0.492024684 container init 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:24:45 compute-0 podman[256069]: 2026-01-21 14:24:45.183584247 +0000 UTC m=+0.499482176 container start 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 21 14:24:45 compute-0 podman[256069]: 2026-01-21 14:24:45.187393068 +0000 UTC m=+0.503291017 container attach 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:24:45 compute-0 happy_wozniak[256085]: 167 167
Jan 21 14:24:45 compute-0 systemd[1]: libpod-7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2.scope: Deactivated successfully.
Jan 21 14:24:45 compute-0 conmon[256085]: conmon 7b15118e68ac88ff9385 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2.scope/container/memory.events
Jan 21 14:24:45 compute-0 podman[256069]: 2026-01-21 14:24:45.192029002 +0000 UTC m=+0.507926951 container died 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:24:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-88193525b2af2954c81ffe4b73e80de7d78b705b68b47415e69939f716d30e51-merged.mount: Deactivated successfully.
Jan 21 14:24:45 compute-0 podman[256069]: 2026-01-21 14:24:45.29722465 +0000 UTC m=+0.613122589 container remove 7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:24:45 compute-0 systemd[1]: libpod-conmon-7b15118e68ac88ff938522abba17f0621c0608ae5589c0bbd2876a9f3625d2d2.scope: Deactivated successfully.
Jan 21 14:24:45 compute-0 podman[256109]: 2026-01-21 14:24:45.487925116 +0000 UTC m=+0.060533102 container create ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 14:24:45 compute-0 systemd[1]: Started libpod-conmon-ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c.scope.
Jan 21 14:24:45 compute-0 podman[256109]: 2026-01-21 14:24:45.459338692 +0000 UTC m=+0.031946708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:24:45 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:45 compute-0 podman[256109]: 2026-01-21 14:24:45.592634563 +0000 UTC m=+0.165242579 container init ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:24:45 compute-0 podman[256109]: 2026-01-21 14:24:45.600833172 +0000 UTC m=+0.173441158 container start ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:24:45 compute-0 podman[256109]: 2026-01-21 14:24:45.610580259 +0000 UTC m=+0.183188275 container attach ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:24:45 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "snap_name": "d9839651-f469-4415-89ae-cc62bff4e10f", "format": "json"}]: dispatch
Jan 21 14:24:45 compute-0 ceph-mon[75031]: pgmap v1359: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Jan 21 14:24:46 compute-0 fervent_hopper[256125]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:24:46 compute-0 fervent_hopper[256125]: --> All data devices are unavailable
Jan 21 14:24:46 compute-0 systemd[1]: libpod-ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c.scope: Deactivated successfully.
Jan 21 14:24:46 compute-0 podman[256109]: 2026-01-21 14:24:46.155236932 +0000 UTC m=+0.727844918 container died ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 14:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e408c233aa89fcef9178e9ca4a075d143b2513ddb0c0d13bd61b445b9c263ecc-merged.mount: Deactivated successfully.
Jan 21 14:24:46 compute-0 podman[256109]: 2026-01-21 14:24:46.309134003 +0000 UTC m=+0.881741989 container remove ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 21 14:24:46 compute-0 systemd[1]: libpod-conmon-ec0cf30ff764e202050975e42a73d1e7c759a4a72d81c4f45450fb750c66971c.scope: Deactivated successfully.
Jan 21 14:24:46 compute-0 sudo[256032]: pam_unix(sudo:session): session closed for user root
Jan 21 14:24:46 compute-0 sudo[256159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:24:46 compute-0 sudo[256159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:24:46 compute-0 sudo[256159]: pam_unix(sudo:session): session closed for user root
Jan 21 14:24:46 compute-0 sudo[256184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:24:46 compute-0 sudo[256184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:24:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Jan 21 14:24:46 compute-0 podman[256222]: 2026-01-21 14:24:46.779158822 +0000 UTC m=+0.048483209 container create aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 21 14:24:46 compute-0 podman[256222]: 2026-01-21 14:24:46.758177232 +0000 UTC m=+0.027501659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:24:46 compute-0 systemd[1]: Started libpod-conmon-aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625.scope.
Jan 21 14:24:46 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:24:47 compute-0 podman[256222]: 2026-01-21 14:24:47.093446205 +0000 UTC m=+0.362770642 container init aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 21 14:24:47 compute-0 podman[256222]: 2026-01-21 14:24:47.102102865 +0000 UTC m=+0.371427252 container start aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 21 14:24:47 compute-0 crazy_chebyshev[256239]: 167 167
Jan 21 14:24:47 compute-0 systemd[1]: libpod-aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625.scope: Deactivated successfully.
Jan 21 14:24:47 compute-0 podman[256222]: 2026-01-21 14:24:47.367417365 +0000 UTC m=+0.636741782 container attach aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:24:47 compute-0 podman[256222]: 2026-01-21 14:24:47.36802385 +0000 UTC m=+0.637348247 container died aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 21 14:24:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a8507ec3b4476a3935871c3a5233e39ec0ba565fd6f39b595ad391d804897bd-merged.mount: Deactivated successfully.
Jan 21 14:24:47 compute-0 podman[256222]: 2026-01-21 14:24:47.464719742 +0000 UTC m=+0.734044149 container remove aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chebyshev, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 14:24:47 compute-0 systemd[1]: libpod-conmon-aac59c070e010bb5a05916399bd34e8e6b7a190d65d09fe2c33038edc7679625.scope: Deactivated successfully.
Jan 21 14:24:47 compute-0 podman[256261]: 2026-01-21 14:24:47.724834967 +0000 UTC m=+0.110664882 container create 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:24:47 compute-0 podman[256261]: 2026-01-21 14:24:47.642969136 +0000 UTC m=+0.028799121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:24:47 compute-0 systemd[1]: Started libpod-conmon-3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b.scope.
Jan 21 14:24:47 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887cd5d59d253afc6a15e709f80af5c1a72dc69bc4e75f5619ed23a1cece62f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887cd5d59d253afc6a15e709f80af5c1a72dc69bc4e75f5619ed23a1cece62f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887cd5d59d253afc6a15e709f80af5c1a72dc69bc4e75f5619ed23a1cece62f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887cd5d59d253afc6a15e709f80af5c1a72dc69bc4e75f5619ed23a1cece62f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:47 compute-0 podman[256261]: 2026-01-21 14:24:47.809125616 +0000 UTC m=+0.194955541 container init 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 21 14:24:47 compute-0 podman[256261]: 2026-01-21 14:24:47.81629582 +0000 UTC m=+0.202125725 container start 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:24:47 compute-0 podman[256261]: 2026-01-21 14:24:47.821428015 +0000 UTC m=+0.207257950 container attach 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:24:47 compute-0 ceph-mon[75031]: pgmap v1360: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Jan 21 14:24:48 compute-0 brave_wu[256278]: {
Jan 21 14:24:48 compute-0 brave_wu[256278]:     "0": [
Jan 21 14:24:48 compute-0 brave_wu[256278]:         {
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "devices": [
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "/dev/loop3"
Jan 21 14:24:48 compute-0 brave_wu[256278]:             ],
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_name": "ceph_lv0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_size": "21470642176",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "name": "ceph_lv0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "tags": {
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.cluster_name": "ceph",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.crush_device_class": "",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.encrypted": "0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.objectstore": "bluestore",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.osd_id": "0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.type": "block",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.vdo": "0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.with_tpm": "0"
Jan 21 14:24:48 compute-0 brave_wu[256278]:             },
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "type": "block",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "vg_name": "ceph_vg0"
Jan 21 14:24:48 compute-0 brave_wu[256278]:         }
Jan 21 14:24:48 compute-0 brave_wu[256278]:     ],
Jan 21 14:24:48 compute-0 brave_wu[256278]:     "1": [
Jan 21 14:24:48 compute-0 brave_wu[256278]:         {
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "devices": [
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "/dev/loop4"
Jan 21 14:24:48 compute-0 brave_wu[256278]:             ],
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_name": "ceph_lv1",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_size": "21470642176",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "name": "ceph_lv1",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "tags": {
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.cluster_name": "ceph",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.crush_device_class": "",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.encrypted": "0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.objectstore": "bluestore",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.osd_id": "1",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.type": "block",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.vdo": "0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.with_tpm": "0"
Jan 21 14:24:48 compute-0 brave_wu[256278]:             },
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "type": "block",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "vg_name": "ceph_vg1"
Jan 21 14:24:48 compute-0 brave_wu[256278]:         }
Jan 21 14:24:48 compute-0 brave_wu[256278]:     ],
Jan 21 14:24:48 compute-0 brave_wu[256278]:     "2": [
Jan 21 14:24:48 compute-0 brave_wu[256278]:         {
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "devices": [
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "/dev/loop5"
Jan 21 14:24:48 compute-0 brave_wu[256278]:             ],
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_name": "ceph_lv2",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_size": "21470642176",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "name": "ceph_lv2",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "tags": {
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.cluster_name": "ceph",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.crush_device_class": "",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.encrypted": "0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.objectstore": "bluestore",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.osd_id": "2",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.type": "block",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.vdo": "0",
Jan 21 14:24:48 compute-0 brave_wu[256278]:                 "ceph.with_tpm": "0"
Jan 21 14:24:48 compute-0 brave_wu[256278]:             },
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "type": "block",
Jan 21 14:24:48 compute-0 brave_wu[256278]:             "vg_name": "ceph_vg2"
Jan 21 14:24:48 compute-0 brave_wu[256278]:         }
Jan 21 14:24:48 compute-0 brave_wu[256278]:     ]
Jan 21 14:24:48 compute-0 brave_wu[256278]: }
Jan 21 14:24:48 compute-0 systemd[1]: libpod-3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b.scope: Deactivated successfully.
Jan 21 14:24:48 compute-0 podman[256261]: 2026-01-21 14:24:48.129634729 +0000 UTC m=+0.515464644 container died 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:24:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-887cd5d59d253afc6a15e709f80af5c1a72dc69bc4e75f5619ed23a1cece62f8-merged.mount: Deactivated successfully.
Jan 21 14:24:48 compute-0 podman[256261]: 2026-01-21 14:24:48.181518341 +0000 UTC m=+0.567348246 container remove 3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wu, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:24:48 compute-0 systemd[1]: libpod-conmon-3390730f24569fdd4bdb4c7065d05540a454320536d9cd2f1d337b861d04917b.scope: Deactivated successfully.
Jan 21 14:24:48 compute-0 sudo[256184]: pam_unix(sudo:session): session closed for user root
Jan 21 14:24:48 compute-0 sudo[256299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:24:48 compute-0 sudo[256299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:24:48 compute-0 sudo[256299]: pam_unix(sudo:session): session closed for user root
Jan 21 14:24:48 compute-0 sudo[256324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:24:48 compute-0 sudo[256324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:24:48 compute-0 podman[256360]: 2026-01-21 14:24:48.633622464 +0000 UTC m=+0.044160826 container create 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:24:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Jan 21 14:24:48 compute-0 systemd[1]: Started libpod-conmon-7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25.scope.
Jan 21 14:24:48 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:24:48 compute-0 podman[256360]: 2026-01-21 14:24:48.707739746 +0000 UTC m=+0.118278128 container init 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:24:48 compute-0 podman[256360]: 2026-01-21 14:24:48.618408874 +0000 UTC m=+0.028947256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:24:48 compute-0 podman[256360]: 2026-01-21 14:24:48.716829206 +0000 UTC m=+0.127367608 container start 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:24:48 compute-0 podman[256360]: 2026-01-21 14:24:48.720920626 +0000 UTC m=+0.131459018 container attach 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 21 14:24:48 compute-0 intelligent_goldwasser[256377]: 167 167
Jan 21 14:24:48 compute-0 systemd[1]: libpod-7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25.scope: Deactivated successfully.
Jan 21 14:24:48 compute-0 podman[256360]: 2026-01-21 14:24:48.721827058 +0000 UTC m=+0.132365440 container died 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 21 14:24:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3164e00fccf5431cb6b5d67db420ccb47a7bf25e645484aec9e90940d1e886a5-merged.mount: Deactivated successfully.
Jan 21 14:24:48 compute-0 podman[256360]: 2026-01-21 14:24:48.765438588 +0000 UTC m=+0.175976980 container remove 7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 21 14:24:48 compute-0 systemd[1]: libpod-conmon-7bde00704e5fcd560f86c59025aa60075e86bce6de1a651f62168e30cfe6ab25.scope: Deactivated successfully.
Jan 21 14:24:48 compute-0 podman[256402]: 2026-01-21 14:24:48.964437897 +0000 UTC m=+0.054293651 container create 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:24:49 compute-0 systemd[1]: Started libpod-conmon-80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9.scope.
Jan 21 14:24:49 compute-0 podman[256402]: 2026-01-21 14:24:48.938484836 +0000 UTC m=+0.028340670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:24:49 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1059d36898f6fde2fe0e44fb6b190035723e05127a80f37da43820fc7da30a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1059d36898f6fde2fe0e44fb6b190035723e05127a80f37da43820fc7da30a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1059d36898f6fde2fe0e44fb6b190035723e05127a80f37da43820fc7da30a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1059d36898f6fde2fe0e44fb6b190035723e05127a80f37da43820fc7da30a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:24:49 compute-0 podman[256402]: 2026-01-21 14:24:49.066110139 +0000 UTC m=+0.155965913 container init 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 21 14:24:49 compute-0 podman[256402]: 2026-01-21 14:24:49.072664258 +0000 UTC m=+0.162520002 container start 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:24:49 compute-0 podman[256402]: 2026-01-21 14:24:49.07642093 +0000 UTC m=+0.166276704 container attach 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:24:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 21 14:24:49 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "snap_name": "d9839651-f469-4415-89ae-cc62bff4e10f_58879d21-f788-4eae-af79-848cdb9584de", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f_58879d21-f788-4eae-af79-848cdb9584de, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:49 compute-0 lvm[256496]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:24:49 compute-0 lvm[256496]: VG ceph_vg0 finished
Jan 21 14:24:49 compute-0 lvm[256497]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:24:49 compute-0 lvm[256497]: VG ceph_vg1 finished
Jan 21 14:24:49 compute-0 lvm[256499]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:24:49 compute-0 lvm[256499]: VG ceph_vg2 finished
Jan 21 14:24:49 compute-0 flamboyant_lehmann[256418]: {}
Jan 21 14:24:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 21 14:24:49 compute-0 systemd[1]: libpod-80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9.scope: Deactivated successfully.
Jan 21 14:24:49 compute-0 systemd[1]: libpod-80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9.scope: Consumed 1.321s CPU time.
Jan 21 14:24:49 compute-0 podman[256402]: 2026-01-21 14:24:49.899906623 +0000 UTC m=+0.989762427 container died 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:24:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp'
Jan 21 14:24:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp' to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta'
Jan 21 14:24:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f_58879d21-f788-4eae-af79-848cdb9584de, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:49 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "snap_name": "d9839651-f469-4415-89ae-cc62bff4e10f", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:49 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:50 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 21 14:24:50 compute-0 ceph-mon[75031]: pgmap v1361: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Jan 21 14:24:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec1059d36898f6fde2fe0e44fb6b190035723e05127a80f37da43820fc7da30a-merged.mount: Deactivated successfully.
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp'
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta.tmp' to config b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0/.meta'
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d9839651-f469-4415-89ae-cc62bff4e10f, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:50 compute-0 podman[256402]: 2026-01-21 14:24:50.467220917 +0000 UTC m=+1.557076691 container remove 80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_lehmann, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:24:50 compute-0 sudo[256324]: pam_unix(sudo:session): session closed for user root
Jan 21 14:24:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:24:50 compute-0 systemd[1]: libpod-conmon-80511def2d1c8dc07679fbd8a3a89780d9d73544b502c02fe57a6476e7d71ee9.scope: Deactivated successfully.
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s wr, 3 op/s
Jan 21 14:24:50 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:24:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:24:50 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662203308163098 of space, bias 1.0, pg target 0.19986609924489293 quantized to 32 (current 32)
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005818088826097966 of space, bias 4.0, pg target 0.6981706591317559 quantized to 16 (current 16)
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:24:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:24:50 compute-0 sudo[256515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:24:50 compute-0 sudo[256515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:24:50 compute-0 sudo[256515]: pam_unix(sudo:session): session closed for user root
Jan 21 14:24:51 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "snap_name": "d9839651-f469-4415-89ae-cc62bff4e10f_58879d21-f788-4eae-af79-848cdb9584de", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:51 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "snap_name": "d9839651-f469-4415-89ae-cc62bff4e10f", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:51 compute-0 ceph-mon[75031]: osdmap e167: 3 total, 3 up, 3 in
Jan 21 14:24:51 compute-0 ceph-mon[75031]: pgmap v1363: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s wr, 3 op/s
Jan 21 14:24:51 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:24:51 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:24:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 2 op/s
Jan 21 14:24:53 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "format": "json"}]: dispatch
Jan 21 14:24:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4564206e-a1af-4abb-a427-9d87957a49e0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:24:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4564206e-a1af-4abb-a427-9d87957a49e0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:24:53 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:24:53.372+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4564206e-a1af-4abb-a427-9d87957a49e0' of type subvolume
Jan 21 14:24:53 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4564206e-a1af-4abb-a427-9d87957a49e0' of type subvolume
Jan 21 14:24:53 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4564206e-a1af-4abb-a427-9d87957a49e0'' moved to trashcan
Jan 21 14:24:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:24:53 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4564206e-a1af-4abb-a427-9d87957a49e0, vol_name:cephfs) < ""
Jan 21 14:24:53 compute-0 ceph-mon[75031]: pgmap v1364: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 2 op/s
Jan 21 14:24:53 compute-0 nova_compute[239261]: 2026-01-21 14:24:53.740 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:24:53 compute-0 nova_compute[239261]: 2026-01-21 14:24:53.741 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:24:53 compute-0 nova_compute[239261]: 2026-01-21 14:24:53.741 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:24:53 compute-0 nova_compute[239261]: 2026-01-21 14:24:53.741 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:24:53 compute-0 nova_compute[239261]: 2026-01-21 14:24:53.758 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:24:53 compute-0 nova_compute[239261]: 2026-01-21 14:24:53.759 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:24:53 compute-0 nova_compute[239261]: 2026-01-21 14:24:53.760 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:24:54 compute-0 podman[256541]: 2026-01-21 14:24:54.344325867 +0000 UTC m=+0.065281067 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 21 14:24:54 compute-0 podman[256540]: 2026-01-21 14:24:54.378478218 +0000 UTC m=+0.094729294 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 21 14:24:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "format": "json"}]: dispatch
Jan 21 14:24:54 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4564206e-a1af-4abb-a427-9d87957a49e0", "force": true, "format": "json"}]: dispatch
Jan 21 14:24:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s wr, 2 op/s
Jan 21 14:24:55 compute-0 ceph-mon[75031]: pgmap v1365: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s wr, 2 op/s
Jan 21 14:24:55 compute-0 nova_compute[239261]: 2026-01-21 14:24:55.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:24:55 compute-0 nova_compute[239261]: 2026-01-21 14:24:55.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:24:55 compute-0 nova_compute[239261]: 2026-01-21 14:24:55.761 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:24:55 compute-0 nova_compute[239261]: 2026-01-21 14:24:55.762 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:24:55 compute-0 nova_compute[239261]: 2026-01-21 14:24:55.762 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:24:55 compute-0 nova_compute[239261]: 2026-01-21 14:24:55.762 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:24:55 compute-0 nova_compute[239261]: 2026-01-21 14:24:55.762 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:24:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:24:56 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943741548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:24:56 compute-0 nova_compute[239261]: 2026-01-21 14:24:56.353 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:24:56 compute-0 nova_compute[239261]: 2026-01-21 14:24:56.527 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:24:56 compute-0 nova_compute[239261]: 2026-01-21 14:24:56.529 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4991MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:24:56 compute-0 nova_compute[239261]: 2026-01-21 14:24:56.529 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:24:56 compute-0 nova_compute[239261]: 2026-01-21 14:24:56.529 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:24:56 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/943741548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:24:56 compute-0 nova_compute[239261]: 2026-01-21 14:24:56.613 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:24:56 compute-0 nova_compute[239261]: 2026-01-21 14:24:56.613 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:24:56 compute-0 nova_compute[239261]: 2026-01-21 14:24:56.632 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:24:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 3 op/s
Jan 21 14:24:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:24:57 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2528613723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:24:57 compute-0 nova_compute[239261]: 2026-01-21 14:24:57.183 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:24:57 compute-0 nova_compute[239261]: 2026-01-21 14:24:57.188 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:24:57 compute-0 nova_compute[239261]: 2026-01-21 14:24:57.227 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:24:57 compute-0 nova_compute[239261]: 2026-01-21 14:24:57.229 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:24:57 compute-0 nova_compute[239261]: 2026-01-21 14:24:57.230 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:24:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 21 14:24:57 compute-0 ceph-mon[75031]: pgmap v1366: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 3 op/s
Jan 21 14:24:57 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2528613723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:24:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 21 14:24:57 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 21 14:24:58 compute-0 nova_compute[239261]: 2026-01-21 14:24:58.230 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:24:58 compute-0 nova_compute[239261]: 2026-01-21 14:24:58.230 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:24:58 compute-0 nova_compute[239261]: 2026-01-21 14:24:58.230 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:24:58 compute-0 ceph-mon[75031]: osdmap e168: 3 total, 3 up, 3 in
Jan 21 14:24:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 14:24:59 compute-0 ceph-mon[75031]: pgmap v1368: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 14:24:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:24:59 compute-0 nova_compute[239261]: 2026-01-21 14:24:59.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:25:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 59 KiB/s wr, 4 op/s
Jan 21 14:25:01 compute-0 ceph-mon[75031]: pgmap v1369: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 59 KiB/s wr, 4 op/s
Jan 21 14:25:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 59 KiB/s wr, 4 op/s
Jan 21 14:25:03 compute-0 ceph-mon[75031]: pgmap v1370: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 59 KiB/s wr, 4 op/s
Jan 21 14:25:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 21 14:25:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 21 14:25:04 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 21 14:25:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 32 KiB/s wr, 2 op/s
Jan 21 14:25:05 compute-0 ceph-mon[75031]: osdmap e169: 3 total, 3 up, 3 in
Jan 21 14:25:05 compute-0 ceph-mon[75031]: pgmap v1372: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 32 KiB/s wr, 2 op/s
Jan 21 14:25:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 225 B/s rd, 29 KiB/s wr, 2 op/s
Jan 21 14:25:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:25:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:07 compute-0 ceph-mgr[75322]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/f269e34a-77c5-41c7-8925-5dbbabe47fe9'.
Jan 21 14:25:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp'
Jan 21 14:25:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp' to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta'
Jan 21 14:25:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:07 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "format": "json"}]: dispatch
Jan 21 14:25:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:07 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:07 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 21 14:25:07 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:25:07 compute-0 ceph-mon[75031]: pgmap v1373: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 225 B/s rd, 29 KiB/s wr, 2 op/s
Jan 21 14:25:07 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Jan 21 14:25:07 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "format": "json"}]: dispatch
Jan 21 14:25:07 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/1571645838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 21 14:25:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Jan 21 14:25:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:09 compute-0 ceph-mon[75031]: pgmap v1374: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Jan 21 14:25:10 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "snap_name": "11cda10a-e8ff-460e-8c56-b778054d00c7", "format": "json"}]: dispatch
Jan 21 14:25:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:10 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:25:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:25:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:25:12 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "snap_name": "11cda10a-e8ff-460e-8c56-b778054d00c7", "format": "json"}]: dispatch
Jan 21 14:25:12 compute-0 ceph-mon[75031]: pgmap v1375: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:25:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:25:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:25:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:25:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:25:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:25:14 compute-0 ceph-mon[75031]: pgmap v1376: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:25:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s wr, 1 op/s
Jan 21 14:25:15 compute-0 ceph-mon[75031]: pgmap v1377: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s wr, 1 op/s
Jan 21 14:25:16 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "snap_name": "11cda10a-e8ff-460e-8c56-b778054d00c7_31812bf0-d81e-40c3-a226-5318705677c6", "force": true, "format": "json"}]: dispatch
Jan 21 14:25:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7_31812bf0-d81e-40c3-a226-5318705677c6, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp'
Jan 21 14:25:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp' to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta'
Jan 21 14:25:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7_31812bf0-d81e-40c3-a226-5318705677c6, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:16 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "snap_name": "11cda10a-e8ff-460e-8c56-b778054d00c7", "force": true, "format": "json"}]: dispatch
Jan 21 14:25:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp'
Jan 21 14:25:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta.tmp' to config b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6/.meta'
Jan 21 14:25:16 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:11cda10a-e8ff-460e-8c56-b778054d00c7, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:25:17 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "snap_name": "11cda10a-e8ff-460e-8c56-b778054d00c7_31812bf0-d81e-40c3-a226-5318705677c6", "force": true, "format": "json"}]: dispatch
Jan 21 14:25:17 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "snap_name": "11cda10a-e8ff-460e-8c56-b778054d00c7", "force": true, "format": "json"}]: dispatch
Jan 21 14:25:17 compute-0 ceph-mon[75031]: pgmap v1378: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:25:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:25:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "format": "json"}]: dispatch
Jan 21 14:25:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:25:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Jan 21 14:25:19 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '074ccac6-f42c-493c-9d0b-aab404cacaf6' of type subvolume
Jan 21 14:25:19 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:25:19.626+0000 7fc516655640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '074ccac6-f42c-493c-9d0b-aab404cacaf6' of type subvolume
Jan 21 14:25:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:19 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "force": true, "format": "json"}]: dispatch
Jan 21 14:25:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/074ccac6-f42c-493c-9d0b-aab404cacaf6'' moved to trashcan
Jan 21 14:25:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 21 14:25:19 compute-0 ceph-mgr[75322]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:074ccac6-f42c-493c-9d0b-aab404cacaf6, vol_name:cephfs) < ""
Jan 21 14:25:19 compute-0 ceph-mon[75031]: pgmap v1379: 305 pgs: 305 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Jan 21 14:25:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 14:25:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "format": "json"}]: dispatch
Jan 21 14:25:21 compute-0 ceph-mon[75031]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "074ccac6-f42c-493c-9d0b-aab404cacaf6", "force": true, "format": "json"}]: dispatch
Jan 21 14:25:22 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 21 14:25:22 compute-0 ceph-mon[75031]: pgmap v1380: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 3 op/s
Jan 21 14:25:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 2 op/s
Jan 21 14:25:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 21 14:25:23 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 21 14:25:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:25:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/223500051' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:25:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:25:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/223500051' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:25:24 compute-0 ceph-mon[75031]: pgmap v1381: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 2 op/s
Jan 21 14:25:24 compute-0 ceph-mon[75031]: osdmap e170: 3 total, 3 up, 3 in
Jan 21 14:25:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/223500051' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:25:24 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/223500051' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:25:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 55 KiB/s wr, 3 op/s
Jan 21 14:25:25 compute-0 podman[256629]: 2026-01-21 14:25:25.356518091 +0000 UTC m=+0.077690491 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 21 14:25:25 compute-0 podman[256628]: 2026-01-21 14:25:25.370210913 +0000 UTC m=+0.086970436 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 21 14:25:26 compute-0 ceph-mon[75031]: pgmap v1383: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 55 KiB/s wr, 3 op/s
Jan 21 14:25:26 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:25:26.460 155179 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:20:fb', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'c6:a2:f4:1c:90:f4'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 21 14:25:26 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:25:26.462 155179 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 21 14:25:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Jan 21 14:25:27 compute-0 ceph-mon[75031]: pgmap v1384: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Jan 21 14:25:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:25:27 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9884 writes, 35K keys, 9884 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9884 writes, 2661 syncs, 3.71 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2879 writes, 8594 keys, 2879 commit groups, 1.0 writes per commit group, ingest: 11.19 MB, 0.02 MB/s
                                           Interval WAL: 2879 writes, 1188 syncs, 2.42 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 14:25:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Jan 21 14:25:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:29 compute-0 ceph-mon[75031]: pgmap v1385: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 55 KiB/s wr, 4 op/s
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.848756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529848825, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2392, "num_deletes": 510, "total_data_size": 3625397, "memory_usage": 3687456, "flush_reason": "Manual Compaction"}
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529874267, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3567065, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28597, "largest_seqno": 30988, "table_properties": {"data_size": 3556652, "index_size": 6139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 24805, "raw_average_key_size": 19, "raw_value_size": 3533546, "raw_average_value_size": 2811, "num_data_blocks": 270, "num_entries": 1257, "num_filter_entries": 1257, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769005327, "oldest_key_time": 1769005327, "file_creation_time": 1769005529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 25543 microseconds, and 9079 cpu microseconds.
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.874308) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3567065 bytes OK
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.874326) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.877751) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.877773) EVENT_LOG_v1 {"time_micros": 1769005529877767, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.877793) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3614275, prev total WAL file size 3614275, number of live WAL files 2.
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.878952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3483KB)], [62(8891KB)]
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529879031, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12671830, "oldest_snapshot_seqno": -1}
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6039 keys, 10914663 bytes, temperature: kUnknown
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529955480, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10914663, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10871730, "index_size": 26759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 152118, "raw_average_key_size": 25, "raw_value_size": 10760917, "raw_average_value_size": 1781, "num_data_blocks": 1095, "num_entries": 6039, "num_filter_entries": 6039, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769003058, "oldest_key_time": 0, "file_creation_time": 1769005529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0890460c-1efa-4b98-b37d-c7b2c3489544", "db_session_id": "MNCZ0UYV5GPEBH7LDUF1", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.955777) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10914663 bytes
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.957596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.5 rd, 142.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.7 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(6.6) write-amplify(3.1) OK, records in: 7076, records dropped: 1037 output_compression: NoCompression
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.957611) EVENT_LOG_v1 {"time_micros": 1769005529957603, "job": 34, "event": "compaction_finished", "compaction_time_micros": 76574, "compaction_time_cpu_micros": 31531, "output_level": 6, "num_output_files": 1, "total_output_size": 10914663, "num_input_records": 7076, "num_output_records": 6039, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529958387, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769005529959764, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.878842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.959895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.959901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.959902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.959904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:25:29 compute-0 ceph-mon[75031]: rocksdb: (Original Log Time 2026/01/21-14:25:29.959906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 21 14:25:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Jan 21 14:25:31 compute-0 ceph-mon[75031]: pgmap v1386: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Jan 21 14:25:31 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:25:31.464 155179 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3ade990a-d6f9-4724-a58c-009e4fc34364, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 21 14:25:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:25:32 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 14K writes, 54K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4725 syncs, 3.15 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4491 writes, 14K keys, 4491 commit groups, 1.0 writes per commit group, ingest: 21.35 MB, 0.04 MB/s
                                           Interval WAL: 4491 writes, 1896 syncs, 2.37 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 14:25:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Jan 21 14:25:33 compute-0 ceph-mon[75031]: pgmap v1387: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Jan 21 14:25:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:25:33.917 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:25:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:25:33.918 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:25:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:25:33.918 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:25:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 21 14:25:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 21 14:25:34 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 21 14:25:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 1 op/s
Jan 21 14:25:35 compute-0 ceph-mon[75031]: osdmap e171: 3 total, 3 up, 3 in
Jan 21 14:25:35 compute-0 ceph-mon[75031]: pgmap v1389: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 1 op/s
Jan 21 14:25:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 0 op/s
Jan 21 14:25:37 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:25:37 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 9322 writes, 33K keys, 9322 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9322 writes, 2294 syncs, 4.06 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2348 writes, 6511 keys, 2348 commit groups, 1.0 writes per commit group, ingest: 6.88 MB, 0.01 MB/s
                                           Interval WAL: 2348 writes, 874 syncs, 2.69 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 14:25:37 compute-0 ceph-mon[75031]: pgmap v1390: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 0 op/s
Jan 21 14:25:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 0 op/s
Jan 21 14:25:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:25:39
Jan 21 14:25:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:25:39 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 14:25:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:25:39 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 14:25:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'vms']
Jan 21 14:25:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:25:39 compute-0 ceph-mon[75031]: pgmap v1391: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 0 op/s
Jan 21 14:25:40 compute-0 ceph-mgr[75322]: [devicehealth INFO root] Check health
Jan 21 14:25:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:25:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:25:41 compute-0 ceph-mon[75031]: pgmap v1392: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:25:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:25:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:25:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:25:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:44 compute-0 ceph-mon[75031]: pgmap v1393: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:45 compute-0 ceph-mon[75031]: pgmap v1394: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:47 compute-0 ceph-mon[75031]: pgmap v1395: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:49 compute-0 ceph-mon[75031]: pgmap v1396: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662106431694153 of space, bias 1.0, pg target 0.19986319295082458 quantized to 32 (current 32)
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006057279001434559 of space, bias 4.0, pg target 0.726873480172147 quantized to 16 (current 16)
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:25:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 21 14:25:50 compute-0 sudo[256674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:25:50 compute-0 sudo[256674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:25:50 compute-0 sudo[256674]: pam_unix(sudo:session): session closed for user root
Jan 21 14:25:50 compute-0 sudo[256699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:25:50 compute-0 sudo[256699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:25:51 compute-0 sudo[256699]: pam_unix(sudo:session): session closed for user root
Jan 21 14:25:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:25:51 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:25:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:25:51 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:25:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:25:51 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:25:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:25:51 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:25:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:25:51 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:25:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:25:51 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:25:51 compute-0 sudo[256753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:25:51 compute-0 sudo[256753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:25:51 compute-0 sudo[256753]: pam_unix(sudo:session): session closed for user root
Jan 21 14:25:51 compute-0 sudo[256778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:25:51 compute-0 sudo[256778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:25:51 compute-0 ceph-mon[75031]: pgmap v1397: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:51 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:25:51 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:25:51 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:25:51 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:25:51 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:25:51 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:25:52 compute-0 podman[256815]: 2026-01-21 14:25:52.044453193 +0000 UTC m=+0.049622018 container create 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 21 14:25:52 compute-0 systemd[1]: Started libpod-conmon-4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea.scope.
Jan 21 14:25:52 compute-0 podman[256815]: 2026-01-21 14:25:52.01886958 +0000 UTC m=+0.024038465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:25:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:25:52 compute-0 podman[256815]: 2026-01-21 14:25:52.147311554 +0000 UTC m=+0.152480459 container init 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:25:52 compute-0 podman[256815]: 2026-01-21 14:25:52.157731177 +0000 UTC m=+0.162900002 container start 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 14:25:52 compute-0 podman[256815]: 2026-01-21 14:25:52.164527622 +0000 UTC m=+0.169696517 container attach 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:25:52 compute-0 gallant_easley[256831]: 167 167
Jan 21 14:25:52 compute-0 systemd[1]: libpod-4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea.scope: Deactivated successfully.
Jan 21 14:25:52 compute-0 podman[256815]: 2026-01-21 14:25:52.167603178 +0000 UTC m=+0.172771993 container died 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 14:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-31f818bc57662ad9fad716c0752e57bca7cfebc80397d3f8504a5e80c0ef0286-merged.mount: Deactivated successfully.
Jan 21 14:25:52 compute-0 podman[256815]: 2026-01-21 14:25:52.226837298 +0000 UTC m=+0.232006113 container remove 4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 21 14:25:52 compute-0 systemd[1]: libpod-conmon-4a6b6141f2f61f6c73d5194dd32053fdb7806bcff2f485537428810117be14ea.scope: Deactivated successfully.
Jan 21 14:25:52 compute-0 podman[256860]: 2026-01-21 14:25:52.443512256 +0000 UTC m=+0.051506893 container create 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 14:25:52 compute-0 systemd[1]: Started libpod-conmon-894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5.scope.
Jan 21 14:25:52 compute-0 podman[256860]: 2026-01-21 14:25:52.418964179 +0000 UTC m=+0.026958856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:25:52 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:52 compute-0 podman[256860]: 2026-01-21 14:25:52.534907088 +0000 UTC m=+0.142901755 container init 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 21 14:25:52 compute-0 podman[256860]: 2026-01-21 14:25:52.544276516 +0000 UTC m=+0.152271153 container start 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 21 14:25:52 compute-0 podman[256860]: 2026-01-21 14:25:52.548845677 +0000 UTC m=+0.156840344 container attach 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:25:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:53 compute-0 blissful_maxwell[256877]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:25:53 compute-0 blissful_maxwell[256877]: --> All data devices are unavailable
Jan 21 14:25:53 compute-0 systemd[1]: libpod-894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5.scope: Deactivated successfully.
Jan 21 14:25:53 compute-0 podman[256860]: 2026-01-21 14:25:53.041055874 +0000 UTC m=+0.649050541 container died 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 21 14:25:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3ca921667abd0a25ff82e092c013394689eb34c7e74989e9942a7c8b3dd8f11-merged.mount: Deactivated successfully.
Jan 21 14:25:53 compute-0 podman[256860]: 2026-01-21 14:25:53.106845674 +0000 UTC m=+0.714840311 container remove 894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:25:53 compute-0 systemd[1]: libpod-conmon-894285dcd9edca711edf68b14d5616b056fa8375c58c2b7928d1d3ace76662b5.scope: Deactivated successfully.
Jan 21 14:25:53 compute-0 sudo[256778]: pam_unix(sudo:session): session closed for user root
Jan 21 14:25:53 compute-0 sudo[256910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:25:53 compute-0 sudo[256910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:25:53 compute-0 sudo[256910]: pam_unix(sudo:session): session closed for user root
Jan 21 14:25:53 compute-0 sudo[256935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:25:53 compute-0 sudo[256935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:25:53 compute-0 podman[256972]: 2026-01-21 14:25:53.543390909 +0000 UTC m=+0.039902591 container create d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:25:53 compute-0 systemd[1]: Started libpod-conmon-d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517.scope.
Jan 21 14:25:53 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:25:53 compute-0 podman[256972]: 2026-01-21 14:25:53.618090245 +0000 UTC m=+0.114601937 container init d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:25:53 compute-0 podman[256972]: 2026-01-21 14:25:53.52571566 +0000 UTC m=+0.022227332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:25:53 compute-0 podman[256972]: 2026-01-21 14:25:53.628770245 +0000 UTC m=+0.125281897 container start d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 14:25:53 compute-0 hopeful_chatelet[256989]: 167 167
Jan 21 14:25:53 compute-0 podman[256972]: 2026-01-21 14:25:53.63225308 +0000 UTC m=+0.128764752 container attach d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 21 14:25:53 compute-0 systemd[1]: libpod-d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517.scope: Deactivated successfully.
Jan 21 14:25:53 compute-0 podman[256994]: 2026-01-21 14:25:53.674783104 +0000 UTC m=+0.027649244 container died d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 14:25:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0b3d84df63104fa5f60abd8ddb823a0d10b598b624cd73827e5c3af161851ac-merged.mount: Deactivated successfully.
Jan 21 14:25:53 compute-0 podman[256994]: 2026-01-21 14:25:53.714424147 +0000 UTC m=+0.067290267 container remove d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_chatelet, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 14:25:53 compute-0 systemd[1]: libpod-conmon-d4ad5c96f445b3fe56f99ebc878cddcdf3124100671fe535a6a5f29d18635517.scope: Deactivated successfully.
Jan 21 14:25:53 compute-0 ceph-mon[75031]: pgmap v1398: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:54 compute-0 podman[257016]: 2026-01-21 14:25:53.938227249 +0000 UTC m=+0.037273136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:25:54 compute-0 podman[257016]: 2026-01-21 14:25:54.358818366 +0000 UTC m=+0.457864183 container create d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 21 14:25:54 compute-0 systemd[1]: Started libpod-conmon-d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932.scope.
Jan 21 14:25:54 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4776ea17d4be847d358f972ab6dd510bac4cbed03d20bc5e056574d2314ea2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4776ea17d4be847d358f972ab6dd510bac4cbed03d20bc5e056574d2314ea2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4776ea17d4be847d358f972ab6dd510bac4cbed03d20bc5e056574d2314ea2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4776ea17d4be847d358f972ab6dd510bac4cbed03d20bc5e056574d2314ea2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:54 compute-0 podman[257016]: 2026-01-21 14:25:54.451825827 +0000 UTC m=+0.550871654 container init d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:25:54 compute-0 podman[257016]: 2026-01-21 14:25:54.460736234 +0000 UTC m=+0.559782041 container start d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 21 14:25:54 compute-0 podman[257016]: 2026-01-21 14:25:54.465316956 +0000 UTC m=+0.564362783 container attach d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 21 14:25:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:54 compute-0 nova_compute[239261]: 2026-01-21 14:25:54.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:25:54 compute-0 nova_compute[239261]: 2026-01-21 14:25:54.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:25:54 compute-0 nova_compute[239261]: 2026-01-21 14:25:54.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:25:54 compute-0 nova_compute[239261]: 2026-01-21 14:25:54.740 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:25:54 compute-0 nova_compute[239261]: 2026-01-21 14:25:54.740 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:25:54 compute-0 nova_compute[239261]: 2026-01-21 14:25:54.741 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:25:54 compute-0 focused_liskov[257032]: {
Jan 21 14:25:54 compute-0 focused_liskov[257032]:     "0": [
Jan 21 14:25:54 compute-0 focused_liskov[257032]:         {
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "devices": [
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "/dev/loop3"
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             ],
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_name": "ceph_lv0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_size": "21470642176",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "name": "ceph_lv0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "tags": {
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.cluster_name": "ceph",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.crush_device_class": "",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.encrypted": "0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.objectstore": "bluestore",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.osd_id": "0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.type": "block",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.vdo": "0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.with_tpm": "0"
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             },
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "type": "block",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "vg_name": "ceph_vg0"
Jan 21 14:25:54 compute-0 focused_liskov[257032]:         }
Jan 21 14:25:54 compute-0 focused_liskov[257032]:     ],
Jan 21 14:25:54 compute-0 focused_liskov[257032]:     "1": [
Jan 21 14:25:54 compute-0 focused_liskov[257032]:         {
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "devices": [
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "/dev/loop4"
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             ],
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_name": "ceph_lv1",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_size": "21470642176",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "name": "ceph_lv1",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "tags": {
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.cluster_name": "ceph",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.crush_device_class": "",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.encrypted": "0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.objectstore": "bluestore",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.osd_id": "1",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.type": "block",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.vdo": "0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.with_tpm": "0"
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             },
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "type": "block",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "vg_name": "ceph_vg1"
Jan 21 14:25:54 compute-0 focused_liskov[257032]:         }
Jan 21 14:25:54 compute-0 focused_liskov[257032]:     ],
Jan 21 14:25:54 compute-0 focused_liskov[257032]:     "2": [
Jan 21 14:25:54 compute-0 focused_liskov[257032]:         {
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "devices": [
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "/dev/loop5"
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             ],
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_name": "ceph_lv2",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_size": "21470642176",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "name": "ceph_lv2",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "tags": {
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.cluster_name": "ceph",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.crush_device_class": "",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.encrypted": "0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.objectstore": "bluestore",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.osd_id": "2",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.type": "block",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.vdo": "0",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:                 "ceph.with_tpm": "0"
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             },
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "type": "block",
Jan 21 14:25:54 compute-0 focused_liskov[257032]:             "vg_name": "ceph_vg2"
Jan 21 14:25:54 compute-0 focused_liskov[257032]:         }
Jan 21 14:25:54 compute-0 focused_liskov[257032]:     ]
Jan 21 14:25:54 compute-0 focused_liskov[257032]: }
Jan 21 14:25:54 compute-0 systemd[1]: libpod-d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932.scope: Deactivated successfully.
Jan 21 14:25:54 compute-0 podman[257016]: 2026-01-21 14:25:54.788890303 +0000 UTC m=+0.887936130 container died d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 21 14:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4776ea17d4be847d358f972ab6dd510bac4cbed03d20bc5e056574d2314ea2c-merged.mount: Deactivated successfully.
Jan 21 14:25:54 compute-0 podman[257016]: 2026-01-21 14:25:54.831295584 +0000 UTC m=+0.930341391 container remove d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:25:54 compute-0 systemd[1]: libpod-conmon-d41c4cd044997359fd2f8bac334ec19caa17060c1d98f8edf0536f102b7bd932.scope: Deactivated successfully.
Jan 21 14:25:54 compute-0 sudo[256935]: pam_unix(sudo:session): session closed for user root
Jan 21 14:25:54 compute-0 sudo[257053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:25:54 compute-0 sudo[257053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:25:54 compute-0 sudo[257053]: pam_unix(sudo:session): session closed for user root
Jan 21 14:25:54 compute-0 sudo[257078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:25:55 compute-0 sudo[257078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:25:55 compute-0 podman[257114]: 2026-01-21 14:25:55.306633161 +0000 UTC m=+0.047618268 container create 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:25:55 compute-0 systemd[1]: Started libpod-conmon-677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c.scope.
Jan 21 14:25:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:25:55 compute-0 podman[257114]: 2026-01-21 14:25:55.378486269 +0000 UTC m=+0.119471396 container init 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:25:55 compute-0 podman[257114]: 2026-01-21 14:25:55.286696527 +0000 UTC m=+0.027681654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:25:55 compute-0 podman[257114]: 2026-01-21 14:25:55.385855718 +0000 UTC m=+0.126840835 container start 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 21 14:25:55 compute-0 podman[257114]: 2026-01-21 14:25:55.389778234 +0000 UTC m=+0.130763371 container attach 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 21 14:25:55 compute-0 elegant_galileo[257130]: 167 167
Jan 21 14:25:55 compute-0 systemd[1]: libpod-677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c.scope: Deactivated successfully.
Jan 21 14:25:55 compute-0 conmon[257130]: conmon 677ec88b2bad3cee6012 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c.scope/container/memory.events
Jan 21 14:25:55 compute-0 podman[257114]: 2026-01-21 14:25:55.392291615 +0000 UTC m=+0.133276722 container died 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 21 14:25:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c464049208fd11ed77a30787764bcaf3c260a0a99334443f8166bc599a4f2f11-merged.mount: Deactivated successfully.
Jan 21 14:25:55 compute-0 podman[257114]: 2026-01-21 14:25:55.432020201 +0000 UTC m=+0.173005308 container remove 677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 21 14:25:55 compute-0 systemd[1]: libpod-conmon-677ec88b2bad3cee6012a62c699d2f153611d362a9c34098b9baa6f67a67f30c.scope: Deactivated successfully.
Jan 21 14:25:55 compute-0 podman[257133]: 2026-01-21 14:25:55.482668622 +0000 UTC m=+0.099659894 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 21 14:25:55 compute-0 podman[257142]: 2026-01-21 14:25:55.512441546 +0000 UTC m=+0.089381344 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
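The two health_status entries above show podman running each container's configured healthcheck ('test': '/openstack/healthcheck', mounted read-only from /var/lib/openstack/healthchecks/<name>) and reporting the result that lands in the journal as health_status=healthy. A minimal sketch, assuming podman is on PATH and using the container names from the log; note that older podman releases exposed this under .State.Healthcheck rather than .State.Health, so the JSON path here is an assumption about the version in use:

    import json
    import subprocess

    def health_status(container: str) -> str:
        # "podman inspect" emits a JSON array; State.Health.Status carries
        # the same "healthy"/"unhealthy" value that appears as
        # health_status= in the journal entries above.
        out = subprocess.run(
            ["podman", "inspect", container],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)[0]["State"]["Health"]["Status"]

    for name in ("ovn_metadata_agent", "ovn_controller"):  # names from the log
        print(name, health_status(name))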
Jan 21 14:25:55 compute-0 podman[257195]: 2026-01-21 14:25:55.589229453 +0000 UTC m=+0.037592835 container create 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 21 14:25:55 compute-0 systemd[1]: Started libpod-conmon-5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa.scope.
Jan 21 14:25:55 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4d6f27e176b21e4fb557651a784060fcfe6c61aa602663c2849ca9398b5960/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4d6f27e176b21e4fb557651a784060fcfe6c61aa602663c2849ca9398b5960/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4d6f27e176b21e4fb557651a784060fcfe6c61aa602663c2849ca9398b5960/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4d6f27e176b21e4fb557651a784060fcfe6c61aa602663c2849ca9398b5960/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
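The four xfs warnings above cap inode timestamps at 0x7fffffff seconds past the epoch (the largest signed 32-bit value, i.e. a filesystem formatted without bigtime). A one-liner confirming what that hex limit means in calendar terms:

    from datetime import datetime, timezone

    # 0x7fffffff is the signed 32-bit maximum; classic xfs inode timestamps
    # saturate there, hence "supports timestamps until 2038".
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00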
Jan 21 14:25:55 compute-0 podman[257195]: 2026-01-21 14:25:55.574438064 +0000 UTC m=+0.022801476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:25:55 compute-0 podman[257195]: 2026-01-21 14:25:55.673487851 +0000 UTC m=+0.121851253 container init 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 21 14:25:55 compute-0 podman[257195]: 2026-01-21 14:25:55.679466728 +0000 UTC m=+0.127830110 container start 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 21 14:25:55 compute-0 podman[257195]: 2026-01-21 14:25:55.685082134 +0000 UTC m=+0.133445526 container attach 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:25:55 compute-0 nova_compute[239261]: 2026-01-21 14:25:55.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:25:55 compute-0 nova_compute[239261]: 2026-01-21 14:25:55.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:25:55 compute-0 nova_compute[239261]: 2026-01-21 14:25:55.749 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:25:55 compute-0 nova_compute[239261]: 2026-01-21 14:25:55.782 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:25:55 compute-0 nova_compute[239261]: 2026-01-21 14:25:55.783 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:25:55 compute-0 nova_compute[239261]: 2026-01-21 14:25:55.783 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
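The three lockutils lines above are the standard oslo.concurrency pattern: the resource tracker serializes on a "compute_resources" semaphore, and the wait/held durations are logged on acquire and release. A minimal sketch of the same decorator, assuming oslo.concurrency is installed; the function body is a stand-in, not nova's actual cache-cleaning code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Everything in here runs under the same in-process lock whose
        # acquire/release the journal lines above record.
        pass

    clean_compute_node_cache()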
Jan 21 14:25:55 compute-0 nova_compute[239261]: 2026-01-21 14:25:55.783 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:25:55 compute-0 nova_compute[239261]: 2026-01-21 14:25:55.784 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:25:55 compute-0 ceph-mon[75031]: pgmap v1399: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:56 compute-0 lvm[257311]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:25:56 compute-0 lvm[257311]: VG ceph_vg1 finished
Jan 21 14:25:56 compute-0 lvm[257310]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:25:56 compute-0 lvm[257310]: VG ceph_vg0 finished
Jan 21 14:25:56 compute-0 lvm[257313]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:25:56 compute-0 lvm[257313]: VG ceph_vg2 finished
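The lvm event-activation messages above fire as each loop-backed PV comes online and its volume group (ceph_vg0/1/2) becomes complete. A quick way to reproduce the same completeness view, assuming the lvm2 tools are installed and support JSON reporting (lvm2 >= 2.02.158):

    import json
    import subprocess

    # "vgs --reportformat json" is the machine-readable vgs report;
    # pv_count per VG is what the "VG ... is complete" messages track.
    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count"],
        check=True, capture_output=True, text=True,
    ).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], vg["pv_count"])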
Jan 21 14:25:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:25:56 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2568593483' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:25:56 compute-0 lvm[257314]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:25:56 compute-0 lvm[257314]: VG ceph_vg1 finished
Jan 21 14:25:56 compute-0 nova_compute[239261]: 2026-01-21 14:25:56.430 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
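The request/response pair above shows nova shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (via oslo_concurrency.processutils) to audit Ceph-backed disk capacity, with the mon audit log recording the dispatch in between. A standalone sketch of the same probe, assuming the ceph CLI and the client.openstack keyring from the log are available; the JSON field names are the ones current Ceph releases emit:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    # total_bytes / total_avail_bytes are what the resource tracker turns
    # into the free_disk figure logged a few lines further down.
    print(stats["total_bytes"], stats["total_avail_bytes"])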
Jan 21 14:25:56 compute-0 fervent_yonath[257212]: {}
Jan 21 14:25:56 compute-0 systemd[1]: libpod-5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa.scope: Deactivated successfully.
Jan 21 14:25:56 compute-0 systemd[1]: libpod-5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa.scope: Consumed 1.323s CPU time.
Jan 21 14:25:56 compute-0 podman[257195]: 2026-01-21 14:25:56.503835122 +0000 UTC m=+0.952198514 container died 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:25:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc4d6f27e176b21e4fb557651a784060fcfe6c61aa602663c2849ca9398b5960-merged.mount: Deactivated successfully.
Jan 21 14:25:56 compute-0 podman[257195]: 2026-01-21 14:25:56.559535706 +0000 UTC m=+1.007899088 container remove 5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 21 14:25:56 compute-0 systemd[1]: libpod-conmon-5c3f805a7e082cfddf803562165b04e4a8b5b2fb939429b2cdacd5eb86d463aa.scope: Deactivated successfully.
Jan 21 14:25:56 compute-0 sudo[257078]: pam_unix(sudo:session): session closed for user root
Jan 21 14:25:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:25:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:25:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:25:56 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:25:56 compute-0 nova_compute[239261]: 2026-01-21 14:25:56.658 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:25:56 compute-0 nova_compute[239261]: 2026-01-21 14:25:56.660 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4936MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
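The hypervisor resource view above embeds the host's PCI device list as JSON; every entry is an Intel chipset function or a virtio device with "numa_node": null, consistent with the NUMA-affinity warning immediately before it. A sketch that groups that list by vendor (abridged to two entries here rather than repeating the full line):

    import json
    from collections import Counter

    pci_devices = json.loads("""[
      {"dev_id": "pci_0000_00_00_0", "vendor_id": "8086", "product_id": "1237"},
      {"dev_id": "pci_0000_00_04_0", "vendor_id": "1af4", "product_id": "1001"}
    ]""")  # abridged from the resource view logged above

    # 8086 = Intel chipset functions, 1af4 = Red Hat virtio devices.
    print(Counter(dev["vendor_id"] for dev in pci_devices))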
Jan 21 14:25:56 compute-0 nova_compute[239261]: 2026-01-21 14:25:56.660 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:25:56 compute-0 nova_compute[239261]: 2026-01-21 14:25:56.661 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:25:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:56 compute-0 sudo[257332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:25:56 compute-0 sudo[257332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:25:56 compute-0 sudo[257332]: pam_unix(sudo:session): session closed for user root
Jan 21 14:25:56 compute-0 nova_compute[239261]: 2026-01-21 14:25:56.750 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:25:56 compute-0 nova_compute[239261]: 2026-01-21 14:25:56.750 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:25:56 compute-0 nova_compute[239261]: 2026-01-21 14:25:56.773 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:25:56 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2568593483' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:25:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:25:56 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:25:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:25:57 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141909906' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:25:57 compute-0 nova_compute[239261]: 2026-01-21 14:25:57.334 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:25:57 compute-0 nova_compute[239261]: 2026-01-21 14:25:57.341 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:25:57 compute-0 nova_compute[239261]: 2026-01-21 14:25:57.365 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:25:57 compute-0 nova_compute[239261]: 2026-01-21 14:25:57.366 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:25:57 compute-0 nova_compute[239261]: 2026-01-21 14:25:57.366 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
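The inventory reported to placement above combines the raw totals from the final resource view (phys_ram=7679MB with 512MB reserved, 8 vCPUs, 59GB disk) with per-class allocation ratios. Placement's usable capacity per resource class is conventionally (total - reserved) * allocation_ratio; that formula is the placement convention, stated here as an assumption rather than quoted from this log. A worked check against the logged numbers:

    inventory = {  # copied from the set_inventory_for_provider line above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1 schedulable units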
Jan 21 14:25:57 compute-0 ceph-mon[75031]: pgmap v1400: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:57 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/141909906' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:25:58 compute-0 nova_compute[239261]: 2026-01-21 14:25:58.342 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:25:58 compute-0 nova_compute[239261]: 2026-01-21 14:25:58.342 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:25:58 compute-0 nova_compute[239261]: 2026-01-21 14:25:58.343 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:25:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:25:58 compute-0 nova_compute[239261]: 2026-01-21 14:25:58.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:25:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:25:59 compute-0 ceph-mon[75031]: pgmap v1401: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:01 compute-0 nova_compute[239261]: 2026-01-21 14:26:01.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:26:01 compute-0 ceph-mon[75031]: pgmap v1402: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:04 compute-0 ceph-mon[75031]: pgmap v1403: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:05 compute-0 ceph-mon[75031]: pgmap v1404: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:06 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:07 compute-0 ceph-mon[75031]: pgmap v1405: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:08 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:09 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:10 compute-0 ceph-mon[75031]: pgmap v1406: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:10 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:26:11 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:26:11 compute-0 ceph-mon[75031]: pgmap v1407: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:26:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:26:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:26:12 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:26:12 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:13 compute-0 ceph-mon[75031]: pgmap v1408: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:14 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:14 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:15 compute-0 ceph-mon[75031]: pgmap v1409: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:16 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:17 compute-0 ceph-mon[75031]: pgmap v1410: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:18 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:19 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:19 compute-0 ceph-mon[75031]: pgmap v1411: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:20 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:21 compute-0 ceph-mon[75031]: pgmap v1412: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:22 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 21 14:26:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2021888656' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:26:23 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 21 14:26:23 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2021888656' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:26:23 compute-0 ceph-mon[75031]: pgmap v1413: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2021888656' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 21 14:26:23 compute-0 ceph-mon[75031]: from='client.? 192.168.122.10:0/2021888656' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 21 14:26:24 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:24 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:25 compute-0 ceph-mon[75031]: pgmap v1414: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:26 compute-0 podman[257380]: 2026-01-21 14:26:26.329469603 +0000 UTC m=+0.056254398 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 21 14:26:26 compute-0 podman[257379]: 2026-01-21 14:26:26.385434624 +0000 UTC m=+0.112277371 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 21 14:26:26 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:27 compute-0 sshd-session[257425]: Accepted publickey for zuul from 192.168.122.10 port 37848 ssh2: ECDSA SHA256:gMvMoT7AZPyICOlNUofDHLZdzcDsG5M/w6K3bI6p4sk
Jan 21 14:26:27 compute-0 systemd-logind[780]: New session 52 of user zuul.
Jan 21 14:26:27 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 21 14:26:27 compute-0 sshd-session[257425]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 21 14:26:27 compute-0 sudo[257429]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 21 14:26:27 compute-0 sudo[257429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 21 14:26:27 compute-0 ceph-mon[75031]: pgmap v1415: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:28 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:29 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:30 compute-0 ceph-mon[75031]: pgmap v1416: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:30 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14558 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:30 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:30 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14560 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:31 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 21 14:26:31 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2092945474' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 21 14:26:32 compute-0 ceph-mon[75031]: from='client.14558 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:32 compute-0 ceph-mon[75031]: pgmap v1417: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:32 compute-0 ceph-mon[75031]: from='client.14560 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:32 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2092945474' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 21 14:26:32 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:26:33.919 155179 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:26:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:26:33.921 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:26:33 compute-0 ovn_metadata_agent[155169]: 2026-01-21 14:26:33.921 155179 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:26:34 compute-0 ceph-mon[75031]: pgmap v1418: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:34 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:34 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:26:36 compute-0 ceph-mon[75031]: pgmap v1419: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:26:36 compute-0 ovs-vsctl[257756]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
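The ovs-vsctl error above is the normal result of reading an unset database key: something (plausibly the ovn_controller healthcheck, though the log does not say) ran the equivalent of `ovs-vsctl get Open_vSwitch . other_config:dpdk-init` on a host where DPDK was never configured. A tolerant way to probe the same key, assuming ovs-vsctl is installed; `--if-exists` suppresses exactly this error:

    import subprocess

    # Without --if-exists this exits non-zero with the db_ctl_base error
    # seen in the journal; with it, a missing key just yields empty output.
    r = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "other_config:dpdk-init"],
        capture_output=True, text=True,
    )
    print(r.stdout.strip() or "dpdk-init not set")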
Jan 21 14:26:36 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:26:37 compute-0 virtqemud[238983]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 21 14:26:37 compute-0 virtqemud[238983]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 21 14:26:37 compute-0 virtqemud[238983]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 21 14:26:38 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: cache status {prefix=cache status} (starting...)
Jan 21 14:26:38 compute-0 ceph-mon[75031]: pgmap v1420: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:26:38 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: client ls {prefix=client ls} (starting...)
Jan 21 14:26:38 compute-0 lvm[258091]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:26:38 compute-0 lvm[258091]: VG ceph_vg2 finished
Jan 21 14:26:38 compute-0 lvm[258100]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:26:38 compute-0 lvm[258100]: VG ceph_vg0 finished
Jan 21 14:26:38 compute-0 lvm[258103]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:26:38 compute-0 lvm[258103]: VG ceph_vg1 finished
Jan 21 14:26:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14564 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:38 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:26:38 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: damage ls {prefix=damage ls} (starting...)
Jan 21 14:26:38 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14566 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:38 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump loads {prefix=dump loads} (starting...)
Jan 21 14:26:39 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 21 14:26:39 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 21 14:26:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14568 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:39 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 21 14:26:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 21 14:26:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3304865984' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 21 14:26:39 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 21 14:26:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Optimize plan auto_2026-01-21_14:26:39
Jan 21 14:26:39 compute-0 ceph-mgr[75322]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 21 14:26:39 compute-0 ceph-mgr[75322]: [balancer INFO root] do_upmap
Jan 21 14:26:39 compute-0 ceph-mgr[75322]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.meta', '.mgr', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 21 14:26:39 compute-0 ceph-mgr[75322]: [balancer INFO root] prepared 0/10 upmap changes
Jan 21 14:26:39 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14572 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:39 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:26:39.792+0000 7fc546f36640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 21 14:26:39 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
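The mgr reply above is self-describing: `healthcheck history ls` is provided by the prometheus mgr module, which is not loaded, and the same pattern recurs at the end of this capture for the insights module. The remedial command is quoted in the error text itself; wrapped here for consistency with the other sketches, assuming a client.admin keyring on the host:

    import subprocess

    # Straight from the error message above; substitute "insights" for the
    # analogous failure later in the log.
    subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"], check=True)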
Jan 21 14:26:39 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 21 14:26:39 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:26:39 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2274070401' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:26:39 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 21 14:26:40 compute-0 ceph-mon[75031]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:40 compute-0 ceph-mon[75031]: pgmap v1421: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:26:40 compute-0 ceph-mon[75031]: from='client.14566 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:40 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3304865984' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 21 14:26:40 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2274070401' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:26:40 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: ops {prefix=ops} (starting...)
Jan 21 14:26:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 21 14:26:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885433264' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 21 14:26:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 21 14:26:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328399773' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 21 14:26:40 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:26:40 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: session ls {prefix=session ls} (starting...)
Jan 21 14:26:40 compute-0 ceph-mds[95704]: mds.cephfs.compute-0.ddixwa asok_command: status {prefix=status} (starting...)
Jan 21 14:26:40 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 21 14:26:40 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3383166530' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50a4a37f0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50adb2be0>)]
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:26:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 21 14:26:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484631162' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 21 14:26:41 compute-0 ceph-mon[75031]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:41 compute-0 ceph-mon[75031]: from='client.14572 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/885433264' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 21 14:26:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/328399773' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 21 14:26:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3383166530' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 21 14:26:41 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3484631162' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 21 14:26:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 21 14:26:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1990766666' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14586 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:41 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14590 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:41 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 21 14:26:41 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/958258899' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 21 14:26:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:26:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: []
Jan 21 14:26:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] scanning for idle connections..
Jan 21 14:26:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fc50ad37ac0>)]
Jan 21 14:26:42 compute-0 ceph-mgr[75322]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 21 14:26:42 compute-0 ceph-mon[75031]: pgmap v1422: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:26:42 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1990766666' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 21 14:26:42 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/958258899' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 21 14:26:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 21 14:26:42 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1606940983' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 21 14:26:42 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 14:26:42 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3434642654' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 21 14:26:42 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:26:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 21 14:26:43 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1361236892' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 21 14:26:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 21 14:26:43 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588299280' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 21 14:26:43 compute-0 ceph-mon[75031]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.tnwklj(active, since 42m)
Jan 21 14:26:43 compute-0 ceph-mon[75031]: from='client.14586 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:43 compute-0 ceph-mon[75031]: from='client.14590 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:43 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1606940983' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 21 14:26:43 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3434642654' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 21 14:26:43 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1361236892' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 21 14:26:43 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3588299280' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 21 14:26:43 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14600 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:43 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:26:43.518+0000 7fc546f36640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 21 14:26:43 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 21 14:26:43 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 21 14:26:43 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3263492334' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 21 14:26:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 21 14:26:44 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666518825' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 21 14:26:44 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14606 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:44 compute-0 ceph-mon[75031]: pgmap v1423: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 21 14:26:44 compute-0 ceph-mon[75031]: mgrmap e23: compute-0.tnwklj(active, since 42m)
Jan 21 14:26:44 compute-0 ceph-mon[75031]: from='client.14600 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:44 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3263492334' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 21 14:26:44 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1666518825' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 21 14:26:44 compute-0 ceph-mon[75031]: from='client.14606 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:06.177958+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:07.178156+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:08.178299+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:09.178504+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:10.178739+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:11.178984+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:12.179134+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 49152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:13.179278+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 49152 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:14.179459+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 40960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:15.179603+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 40960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:16.179730+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 32768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:17.179959+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 32768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:18.180087+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 32768 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:19.180298+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:20.180430+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:21.180648+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 24576 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:22.180854+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 16384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:23.181049+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 16384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:24.181220+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 8192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:25.181373+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 8192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:26.181619+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:27.181750+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 0 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:28.181935+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1040384 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:29.182059+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1032192 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:30.182215+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1032192 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:31.182486+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1015808 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:32.182649+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1015808 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:33.182873+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:34.183026+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:35.183204+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:36.183397+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1007616 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:37.183613+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:38.183758+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 999424 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:39.183904+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:40.184082+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:41.184252+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 991232 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:42.184389+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 983040 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:43.184594+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 983040 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:44.184718+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 974848 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:45.184824+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 974848 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:46.185205+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 974848 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:47.185357+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 966656 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:48.185539+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 966656 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:49.185718+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 958464 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:50.185861+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:51.186527+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 958464 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:52.186603+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 958464 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:53.186734+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 950272 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:54.186946+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 950272 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:55.187101+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 950272 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:56.187246+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 942080 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:57.187390+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 942080 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:58.187530+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 942080 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:59.187688+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 925696 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:00.187869+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 925696 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:01.188092+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 917504 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:02.188266+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 917504 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:03.188449+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 909312 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:04.188676+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 909312 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:05.188880+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 909312 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:06.189041+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 901120 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:07.189193+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 901120 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:08.189400+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 901120 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:09.189538+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:10.189679+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 892928 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:11.189927+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:12.190079+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:13.190235+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:14.190360+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:15.190512+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:16.190657+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 860160 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:17.191441+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:18.192021+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:19.192180+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:20.192730+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:21.193249+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:22.193425+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:23.193584+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:24.193740+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:25.193860+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:26.194131+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:27.194326+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:28.194500+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:29.194610+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:30.194750+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 802816 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:31.194908+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:32.195076+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:33.195199+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 794624 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:34.195340+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:35.195472+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 786432 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:36.195609+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:37.195791+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:38.195952+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 778240 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:39.196137+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:40.196276+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:41.196467+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 770048 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:42.196604+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:43.196865+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:44.197033+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 761856 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:45.197218+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:46.197341+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 753664 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:47.197426+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:48.197606+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:49.197726+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 745472 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:50.197852+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 737280 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:51.197991+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 737280 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:52.198143+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 737280 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:53.198284+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 729088 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:54.198430+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:55.198662+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:56.198749+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:57.198918+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:58.199097+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:59.199260+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:00.199507+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 696320 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:01.199845+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 696320 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:02.200170+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 696320 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:03.200408+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:04.200586+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:05.200792+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:06.200990+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5492 writes, 23K keys, 5492 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5492 writes, 812 syncs, 6.76 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5492 writes, 23K keys, 5492 commit groups, 1.0 writes per commit group, ingest: 18.42 MB, 0.03 MB/s
                                           Interval WAL: 5492 writes, 812 syncs, 6.76 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
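
These "** Compaction Stats **" blocks are RocksDB's periodic statistics dump, which BlueStore forwards into the OSD log at this debug level; the bracketed tags ([O-1], [O-2], [L], [P]) name RocksDB column families created by BlueStore's sharding configuration. Every table is at or near zero because this OSD is idle. For live numbers without grepping the log, the perf counters can be pulled over the local admin socket; a minimal sketch using the stock `ceph daemon` CLI (the OSD id matches this log, everything else is an assumption):

    import json
    import subprocess

    def osd_perf_dump(osd_id: int) -> dict:
        # `ceph daemon <name> perf dump` queries the daemon's admin socket
        # on the local host and returns JSON perf-counter sections.
        out = subprocess.check_output(
            ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"])
        return json.loads(out)

    stats = osd_perf_dump(2)          # osd.2, as seen in this log
    print(sorted(stats)[:10])         # list some available counter sections
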
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:07.201151+0000)
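
Note that the _check_auth_rotating expiry advances by roughly one second per tick while the syslog timestamp on every line stays frozen at 14:26:44, which is consistent with a buffered backlog of ~1 Hz monclient ticks being flushed to the journal at once (an inference from the timestamps, not from the source). Verifying the tick spacing from two adjacent expiries:

    from datetime import datetime

    fmt = "%Y-%m-%dT%H:%M:%S.%f%z"    # matches the +0000 suffix in the log
    t1 = datetime.strptime("2026-01-21T13:55:07.201151+0000", fmt)
    t2 = datetime.strptime("2026-01-21T13:55:08.201405+0000", fmt)
    print((t2 - t1).total_seconds())  # ~1.0 s between successive ticks
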
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
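
Each tune_memory line is the priority-cache autotuner comparing the allocator's heap figures against the memory target (4294967296 B = 4 GiB, the osd_memory_target default); with only ~70 MiB mapped, far under target, the aggregate cache budget is left unchanged (old mem == new mem). In round units, values taken from the line above:

    # Unit conversion only; field meanings as described in the note above.
    fields = {"target": 4_294_967_296, "mapped": 72_769_536,
              "heap": 73_392_128, "old/new mem": 2_845_415_832}
    for name, v in fields.items():
        print(f"{name:11s} {v / 2**20:9.1f} MiB")
    # target 4096.0, mapped ~69.4, heap ~70.0, old/new mem ~2713.6 MiB
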
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:08.201405+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
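
The heartbeat's store_statfs fields are hex byte counts; the first triple appears to follow store_statfs_t's printing order of available / internally reserved / total, then data stored / allocated (treat the field naming as an assumption). A sketch that decodes the line above:

    import re

    LINE = ("osd.2 122 heartbeat osd_stat(store_statfs("
            "0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, "
            "compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), "
            "peers [0,1] op hist [])")

    m = re.search(r"store_statfs\((0x[0-9a-f]+)/(0x[0-9a-f]+)/(0x[0-9a-f]+),"
                  r" data (0x[0-9a-f]+)/(0x[0-9a-f]+)", LINE)
    avail, reserved, total, stored, allocated = (int(x, 16)
                                                 for x in m.groups())
    GiB = 2**30
    print(f"total {total / GiB:.2f} GiB, available {avail / GiB:.2f} GiB")
    print(f"data stored {stored} B, allocated {allocated} B "
          f"(alloc amplification {allocated / stored:.1f}x)")
    # -> total ~20.00 GiB, available ~19.95 GiB; ~736 KiB stored, ~2.0x
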
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:09.201694+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
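
The two commit_cache_size ratios look unrelated but resolve to the same absolute reservation: applied to the two block-cache capacities from the dump above (224 MiB and 1.125 GiB), each works out to 64 MiB of high-priority space, presumably the constant commit_cache_size is preserving (an inference from the arithmetic, not from the source):

    # 0.285714 ~= 2/7 of the 224 MiB cache; 0.0555556 ~= 1/18 of 1.125 GiB.
    for ratio, cap in [(0.285714, 234_881_024), (0.0555556, 1_207_959_552)]:
        print(f"{ratio} * {cap} B = {ratio * cap / 2**20:.1f} MiB")
    # both lines print ~64.0 MiB
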
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
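
_resize_shards shows how the 2845415832-byte budget from tune_memory is split across BlueStore's cache pools. The kv and kv_onode allocations match the two BinnedLRUCache capacities printed in the RocksDB dump ("1.12 GB" and "224.00 MB"), and the tiny *_used figures confirm the caches are nearly empty. The split as arithmetic, values from the line above:

    cache_size = 2_845_415_832
    alloc = {"kv": 1_207_959_552,        # -> the "capacity: 1.12 GB" cache
             "kv_onode": 234_881_024,    # -> the "capacity: 224.00 MB" cache
             "meta": 1_140_850_688,
             "data": 218_103_808}
    for name, b in alloc.items():
        print(f"{name:9s} {b / 2**20:7.1f} MiB  {b / cache_size:6.1%}")
    print(f"allocated {sum(alloc.values()) / 2**20:.1f} "
          f"of {cache_size / 2**20:.1f} MiB")
    # kv 1152.0 MiB 42.5%, kv_onode 224.0 8.3%, meta 1088.0 40.1%,
    # data 208.0 7.7%; allocated 2672.0 of 2713.6 MiB
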
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:10.203765+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:11.204014+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:12.204210+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:13.204441+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:14.204650+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:15.204872+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:16.204997+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:17.205183+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:18.205329+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:19.205484+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:20.205666+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 573440 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:21.205851+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 565248 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:22.206002+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 565248 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:23.206155+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 565248 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:24.206282+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:25.206506+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:26.206756+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:27.207105+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:28.207289+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:29.207465+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 540672 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:30.207718+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 540672 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:31.207935+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 540672 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:32.208163+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 532480 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:33.208420+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 532480 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:34.208687+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 524288 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:35.208948+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 524288 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:36.209228+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 524288 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:37.209468+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:38.209757+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:39.210054+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 507904 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:40.210310+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 507904 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:41.210616+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 507904 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:42.210865+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 499712 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:43.211120+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 499712 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:44.211369+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:45.211724+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:46.211912+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:47.212077+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:48.212300+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 483328 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:49.212772+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 483328 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:50.213001+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 483328 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:51.213278+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:52.213486+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:53.213747+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:54.213958+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 450560 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:55.214184+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 442368 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:56.214505+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 442368 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:57.214715+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 442368 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:58.215067+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 434176 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 271.400299072s of 271.403503418s, submitted: 2
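
The _kv_sync_thread utilization line is the clearest idleness signal in this stretch: BlueStore's KV-commit thread spent essentially its whole ~271 s lifetime idle and submitted only two transactions. As a fraction, using the figures from the line above:

    idle, total, submitted = 271.400299072, 271.403503418, 2
    print(f"busy {(total - idle) * 1000:.1f} ms over {total:.0f} s "
          f"({idle / total:.4%} idle), {submitted} submits")
    # busy 3.2 ms over 271 s (99.9988% idle), 2 submits
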
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:59.215350+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935662 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 434176 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:00.215703+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [0,0,1])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 286720 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:01.215935+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 1277952 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:02.216144+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:03.216340+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:04.216715+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:05.216925+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:06.217080+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:07.217244+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:08.217391+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:09.217575+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:10.217751+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:11.217946+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:12.218125+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:13.218354+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1196032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:14.218612+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1187840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:15.218921+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1187840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:16.219240+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:17.219458+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:18.219693+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:19.219933+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:20.220131+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:21.220403+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:22.220693+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1163264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:23.220970+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1163264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:24.221180+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1187840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:25.221385+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1187840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:26.221664+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:27.221865+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:28.222075+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:29.222374+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:30.222639+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:31.222909+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1163264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:32.223176+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1163264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:33.223434+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:34.223705+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:35.223913+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:36.224090+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:37.224301+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:38.224674+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:39.224881+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:40.225103+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:41.225462+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1130496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:42.225912+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1130496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:43.226136+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:44.226322+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:45.226533+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1114112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:46.226958+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1114112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:47.227160+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1114112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:48.227385+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:49.227658+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:50.227848+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1097728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:51.228159+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1097728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:52.228340+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1097728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:53.228470+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1089536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:54.228618+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1089536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:55.228747+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1089536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:56.228922+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:57.229160+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:58.229461+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:59.229627+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:00.229756+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1064960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:01.229919+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1064960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:02.230060+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1064960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:03.230237+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1056768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:04.230404+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1056768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:05.230594+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:06.230739+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:07.230922+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:08.231070+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:09.231221+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 1040384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:10.231358+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 1040384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:11.231584+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:12.231820+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:13.231992+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:14.232330+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:15.232456+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:16.232615+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:17.232888+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:18.233005+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:19.233154+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:20.233332+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:21.233654+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:22.233898+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:23.234240+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:24.234449+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:25.234653+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:26.234782+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:27.234928+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:28.235061+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:29.235228+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:30.235434+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:31.235656+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:32.235801+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:33.235948+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:34.236145+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:35.236385+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:36.236596+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:37.236770+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:38.237033+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 983040 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:39.237239+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 983040 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:40.237466+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 983040 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:41.237687+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 983040 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:42.237869+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 983040 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:43.238102+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:44.238306+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:45.238486+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:46.238755+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:47.239026+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:48.239208+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:49.239440+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:50.239651+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:51.239872+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:52.240053+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:53.240184+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:54.240290+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:55.240413+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:56.240546+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:57.240709+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:58.240852+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:59.240983+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:00.241327+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:01.241494+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 958464 heap: 74440704 old mem: 2845415832 new mem: 2845415832
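From here the otherwise static tune_memory numbers start to drift: mapped rises in 8 KiB steps (73474048 -> 73482240 -> 73490432 -> ...) while unmapped falls by exactly the same amount each time, so the heap total never moves; previously released pages are being touched back into use rather than new memory being allocated. Verifying the pattern from four consecutive samples taken from this run:

    # Successive (mapped, unmapped) samples copied from the log.
    samples = [(73474048, 966656), (73482240, 958464),
               (73490432, 950272), (73498624, 942080)]
    for (m0, u0), (m1, u1) in zip(samples, samples[1:]):
        assert m1 - m0 == u0 - u1 == 8192       # one 8 KiB span per step
        assert m1 + u1 == 74440704              # heap total is constant
    print("mapped grows in 8 KiB steps; heap size itself never changes")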
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:02.241684+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:03.241825+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:04.241952+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:05.242133+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:06.242297+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:07.242526+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:08.242657+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:09.242844+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:10.242968+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:11.243145+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:12.243339+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:13.243463+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:14.243613+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:15.243795+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:16.243922+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:17.244072+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:18.244251+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:19.244423+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:20.244664+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:21.244960+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:22.245151+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:23.245296+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:24.245495+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:25.245644+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:26.245765+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:27.245922+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:28.246039+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:29.246178+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:30.246294+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:31.246506+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:32.246643+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:33.246760+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:34.246927+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:35.247106+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:36.247386+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:37.247742+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 917504 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:38.247873+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 917504 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:39.248049+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 917504 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:40.248190+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:41.248386+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:42.248533+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:43.248653+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:44.248795+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:45.248953+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:46.249111+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:47.249251+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:48.249417+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:49.249544+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:50.249746+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:51.249900+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:52.250070+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:53.250217+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:54.250409+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:55.250592+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:56.250734+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:57.250919+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:58.251066+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:59.251483+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:00.251717+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:01.252001+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:02.252192+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:03.252394+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:04.252620+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:05.252757+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:06.252911+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:07.253065+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:08.253231+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:09.253397+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:10.253551+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:11.253822+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:12.254005+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:13.254198+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:14.254453+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:15.254600+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:16.254814+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:17.255080+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:18.255315+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:19.255463+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:20.255625+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:21.255780+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:22.255927+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:23.256047+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:24.256278+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:25.256414+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:26.256549+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:27.256683+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:28.256814+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:29.256946+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:30.257091+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:31.257261+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:32.257385+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:33.257512+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:34.257668+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:35.257829+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:36.258029+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:37.258148+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:38.258366+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:39.258490+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:40.258718+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:41.258891+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:42.259028+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:43.259224+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:44.259430+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:45.259685+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:46.259892+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:47.260075+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:48.260205+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:49.260343+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:50.260475+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:51.260659+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:52.260806+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:53.260976+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:54.261189+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:55.261485+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:56.261695+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:57.261873+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:58.262060+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:59.262268+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:00.262513+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:01.262826+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:02.263049+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:03.263317+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:04.263658+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:05.263917+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:06.264196+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:07.264476+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:08.264831+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:09.265037+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:10.265185+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:11.265380+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:12.265591+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:13.265778+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc ms_handle_reset ms_handle_reset con 0x557951a20000
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2882926037
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: get_auth_request con 0x5579519be400 auth_method 0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_configure stats_period=5
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:14.265945+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:15.266128+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:16.266292+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:17.266457+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:18.266588+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:19.266691+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:20.266816+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:21.266964+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:22.267147+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:23.267315+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:24.445909+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:25.446061+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:26.446176+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:27.446290+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:28.446412+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:29.446631+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:30.446760+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:31.446925+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:32.447061+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:33.447240+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:34.447397+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:35.447583+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:36.447784+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:37.447960+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:38.448181+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:39.448387+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:40.448534+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:41.449027+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:42.449149+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:43.449290+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:44.449411+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:45.449606+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:46.449775+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:47.449892+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:48.450018+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:49.450129+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:50.450254+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:51.450473+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:52.450613+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:53.450755+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:54.451052+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:55.451174+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935575 data_alloc: 218103808 data_used: 5012
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:56.451294+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:57.451461+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:58.451604+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:59.451716+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 298.759918213s of 300.480377197s, submitted: 90
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: handle_auth_request added challenge on 0x55795377fc00
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 311296 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:00.451868+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:01.452072+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:02.452230+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:03.452359+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:04.452486+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:05.452618+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:06.452761+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:07.452924+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:08.453147+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:09.453308+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:10.453492+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:11.453802+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:12.453988+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:13.454117+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:14.454242+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:15.454407+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:16.454607+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:17.454761+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:18.454946+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:19.455170+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:20.455332+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:21.455540+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:22.455746+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:23.455917+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:24.456055+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:25.456232+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:26.456376+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:27.456523+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:28.456682+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:29.456829+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:30.456953+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:31.457097+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:32.457261+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:33.457460+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:34.457600+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:35.457719+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:36.457866+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:37.458019+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:38.458141+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:39.458268+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:40.458455+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:41.458633+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:42.458777+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:43.458953+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:44.459102+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:45.459265+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:46.459393+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:47.459605+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:48.459736+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 204800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:49.459869+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 204800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:50.460062+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 204800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:51.460914+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:52.461042+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:53.461190+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:54.461348+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:55.461513+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:56.461685+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:57.461844+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:58.461982+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:59.462100+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:00.462267+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:01.462476+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:02.462653+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:03.462797+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:04.462969+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:05.463117+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:06.463333+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 172032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:07.463517+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 172032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:08.463650+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:09.463831+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:10.463962+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:11.464130+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:12.464341+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:13.464516+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:14.464640+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:15.464809+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:16.464984+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:17.465123+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:18.465266+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:19.465399+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:20.465527+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:21.465692+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:22.465907+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:23.958434+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14609 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:24.958593+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:25.958774+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:26.958922+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:27.959055+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:28.959169+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:29.959310+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:30.959426+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:31.959568+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:32.959703+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:33.959881+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:34.960008+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:35.960145+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:36.960266+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:37.960376+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:38.960488+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:39.960607+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:40.960798+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:41.961009+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:42.961184+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:43.961361+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:44.961542+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:45.961720+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:46.961849+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:47.961929+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:48.962069+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:49.962209+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:50.962338+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:51.962523+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:52.962651+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:53.962854+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:54.962997+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:55.963196+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:56.963338+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:57.963505+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:58.963715+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:59.963833+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:00.963965+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:01.964116+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:02.964245+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:03.964365+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:04.964503+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:05.964722+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:06.967513+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 114688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:07.967655+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 114688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:08.967802+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 114688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:09.968003+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 114688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:10.968173+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:11.968644+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:12.968793+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:13.969025+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:14.969204+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:15.969429+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:16.969651+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:17.969776+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:18.969943+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:19.970108+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:20.970494+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:21.970724+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:22.971080+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:23.971216+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:24.971405+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:25.971642+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:26.971808+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:27.971943+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:28.972126+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:29.972310+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:30.972517+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:31.972771+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:32.972901+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:33.973034+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:34.973156+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:35.973313+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:36.973490+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:37.973653+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:38.973799+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:39.974008+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:40.974191+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:41.974371+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:42.974514+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:43.974628+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:44.974936+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:45.975132+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:46.975341+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:47.975504+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:48.975725+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:49.975914+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:50.976079+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:51.976326+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:52.976445+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:53.976634+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:54.976751+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:55.976942+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:56.977127+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:57.977270+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:58.977387+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:59.977586+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:00.977764+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:01.977942+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:02.978112+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:03.978384+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:04.978618+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:05.978868+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:06.978998+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread fragmentation_score=0.000135 took=0.000023s
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:07.979121+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:08.979316+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:09.979470+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:10.979626+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:11.979791+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:12.979953+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:13.980131+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:14.980311+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:15.980436+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:16.980616+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:17.980798+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:18.980965+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:19.981187+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:20.981384+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:21.981588+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:22.981760+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:23.981959+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:24.982202+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:25.982409+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 65536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:26.982624+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:27.982777+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:28.982914+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:29.983062+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:30.983257+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:31.983509+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:32.983674+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:33.983825+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:34.984004+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:35.984242+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:36.984449+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:37.984644+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:38.984800+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:39.984994+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:40.985143+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 49152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:41.985359+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:42.985541+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:43.985729+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:44.985955+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:45.986135+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 40960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:46.986356+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:47.986528+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:48.986705+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:49.986866+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:50.986991+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:51.987151+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:52.987371+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:53.987549+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:54.987800+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:55.987963+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:56.988110+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:57.988277+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:58.988442+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:59.988609+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:00.988742+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:01.988882+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:02.989042+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:03.989216+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:04.989353+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:05.989549+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 32768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:06.989810+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5720 writes, 24K keys, 5720 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5720 writes, 926 syncs, 6.18 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.030       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca74b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55794fca7a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:07.989980+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:08.990209+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:09.990370+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:10.990624+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:11.990850+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:12.990991+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:13.991100+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:14.991287+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:15.997831+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:16.997965+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:17.998075+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:18.998239+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:19.998385+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:20.998520+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:21.998713+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:22.998893+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:23.999018+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:24.999202+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:25.999387+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:26.999690+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 0 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:27.999853+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1032192 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:29.000011+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1032192 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:30.000323+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1032192 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:31.000486+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:32.000642+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:33.000790+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:34.000965+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:35.001155+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:36.001371+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:37.001483+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:38.001639+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:39.001798+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:40.001943+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:41.002107+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:42.002325+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:43.002502+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:44.002681+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:45.002845+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:46.003041+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:47.003183+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:48.003345+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:49.003479+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:50.003654+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:51.009704+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:52.009870+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:53.010058+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:54.010257+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:55.010438+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:56.010636+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:57.010792+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:58.010933+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:59.011102+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.046539307s of 300.132598877s, submitted: 24
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:00.011240+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 720896 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:01.011408+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:02.011598+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:03.011746+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:04.011838+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:05.012069+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:06.012281+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:07.012418+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:08.012638+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:09.012824+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:10.013010+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:11.013150+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:12.013300+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:13.013482+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:14.013609+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:15.013806+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:16.014011+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:17.014200+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:18.014413+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:19.014537+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:20.014752+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:21.014930+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:22.015247+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:23.015390+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:24.015523+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:25.015717+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:26.015908+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:27.016073+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:28.016216+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:29.016334+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:30.016436+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:31.016616+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:32.016795+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:33.016952+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:34.017159+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:35.017332+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:36.017468+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:37.017630+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:38.017818+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:39.017963+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:40.018214+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:41.018457+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:42.018791+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:43.018952+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:44.019084+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:45.019227+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:46.019432+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:47.019635+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:48.019806+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:49.019993+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:50.020176+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:51.020304+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:52.020455+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:53.020625+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:54.020887+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:55.021067+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:56.021260+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:57.021499+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:58.021657+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:59.021801+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:00.021948+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:01.022136+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:02.022322+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:03.022502+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:04.022670+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:05.022843+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:06.023061+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:07.023215+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:08.023327+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:09.023497+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:10.023704+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:11.023854+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:12.024063+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:13.024272+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:14.024445+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:15.024617+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:16.024790+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:17.024948+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:18.025104+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:19.025260+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:20.025432+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:21.025606+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:22.025833+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:23.025982+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:24.026118+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:25.026275+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:26.026412+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:27.026637+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:28.027011+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:29.027272+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:30.027463+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:31.027652+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:32.028043+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:33.028194+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:34.028345+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:35.028500+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:36.028614+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:37.028780+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:38.028948+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:39.029137+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:40.029308+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:41.029502+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:42.029738+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:43.029928+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:44.030137+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:45.030287+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:46.030416+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:47.030540+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:48.030726+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:49.030892+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:50.031075+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:51.031278+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:52.031496+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:53.031606+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:54.031786+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:55.031933+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:56.032071+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:57.032190+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:58.032333+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:59.032501+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:00.032630+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:01.032758+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:02.032942+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:03.033148+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:04.033323+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:05.033490+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:06.033662+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:07.033798+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:08.033942+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:09.034058+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:10.034260+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 573440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:11.034429+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:12.034658+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:13.034832+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:14.034977+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:15.035068+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:16.035249+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:17.035447+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:18.035612+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:19.035804+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:20.035928+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:21.036089+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:22.036242+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:23.036442+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:24.036681+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:25.036842+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:26.037056+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:27.037254+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:28.037424+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:29.037637+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:30.037821+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:31.037978+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:32.038174+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:33.038366+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:34.038547+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:35.038759+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:36.038935+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:37.039084+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:38.039251+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:39.039449+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:40.039618+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:41.039795+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:42.039948+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:43.040098+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:44.040229+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:45.040378+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:46.040509+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:47.040713+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:48.040917+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:49.041100+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:50.041262+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:51.041480+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:52.041653+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:53.041834+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:54.042049+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:55.042231+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:56.042414+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:57.042673+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:58.042832+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:59.043037+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:00.043215+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:01.043405+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:02.043659+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:03.043848+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:04.043985+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:05.044131+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:06.044290+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:07.044429+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:08.044630+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:09.044852+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:10.045003+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:11.045220+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:12.045400+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:13.045541+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:14.045720+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb9000/0x0/0x4ffc00000, data 0xb7d2c/0x173000, compress 0x0/0x0/0x0, omap 0xaa6a, meta 0x2bc5596), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:15.045878+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:16.046045+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:17.046188+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936215 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: handle_auth_request added challenge on 0x55795377ec00
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:18.046358+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 198.138137817s of 198.749801636s, submitted: 90
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 393216 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:19.046535+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _renew_subs
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 335872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:20.046726+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _renew_subs
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 125 ms_handle_reset con 0x55795377ec00 session 0x557951772540
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fc6b1000/0x0/0x4ffc00000, data 0x8bb4b8/0x979000, compress 0x0/0x0/0x0, omap 0xc572, meta 0x2bc3a8e), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 16908288 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:21.046880+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 16908288 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:22.047080+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992128 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 16875520 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:23.047214+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 16875520 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:24.047366+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16850944 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:25.047529+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 16850944 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fc6ab000/0x0/0x4ffc00000, data 0x8bd093/0x97d000, compress 0x0/0x0/0x0, omap 0xe84a, meta 0x2bc17b6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: handle_auth_request added challenge on 0x55795244d000
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:26.047691+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 16678912 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:27.048101+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1056048 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 25026560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 126 ms_handle_reset con 0x55795244d000 session 0x557951f8ac40
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:28.048270+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25010176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:29.048480+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 25010176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:30.048720+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x152ec6e/0x15f1000, compress 0x0/0x0/0x0, omap 0xe95c, meta 0x2bc16a4), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:31.048855+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x152ec6e/0x15f1000, compress 0x0/0x0/0x0, omap 0xe95c, meta 0x2bc16a4), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.134987831s of 12.674633980s, submitted: 36
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:32.049040+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063912 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:33.049203+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:34.049376+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:35.049631+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:36.049788+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:37.049961+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063912 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:38.050133+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:39.050327+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:40.050494+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:41.050662+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:42.050871+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063912 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:43.051123+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:44.051838+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:45.051970+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76718080 unmapped: 25001984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:46.052124+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:47.052273+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063912 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:48.052403+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:49.052543+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:50.052738+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:51.052905+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 24993792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:52.053049+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: handle_auth_request added challenge on 0x5579543f6400
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.509849548s of 21.520481110s, submitted: 13
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065604 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 24846336 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:53.053194+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 12
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 24805376 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:54.053323+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 24805376 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:55.053465+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:56.053597+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:57.053701+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068268 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x15308be/0x15f7000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:58.053826+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x15308be/0x15f7000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:59.053942+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:00.054094+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 24756224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 13
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:01.054224+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: handle_auth_request added challenge on 0x5579540e5800
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 24551424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:02.054403+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067262 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.059196472s of 10.092103004s, submitted: 4
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:03.054640+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:04.054836+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x15308be/0x15f7000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x15308be/0x15f7000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:05.054961+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:06.055111+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:07.055258+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067390 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:08.055373+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:09.055531+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:10.056281+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 24535040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:11.056418+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 24518656 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:12.056621+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065698 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 24518656 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:13.056773+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.215215683s of 10.411604881s, submitted: 5
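
The _kv_sync_thread utilization lines quantify how busy BlueStore's commit thread was over the reporting window: here it was idle 10.215 s of 10.412 s, so roughly 1.9% busy while committing 5 transaction batches. The arithmetic, for reference:

    idle, window, submitted = 10.215215683, 10.411604881, 5
    print(f"busy {1 - idle / window:.1%}, "
          f"{submitted / window:.2f} batches/s")  # busy 1.9%, 0.48 batches/s
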
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:14.056883+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:15.057047+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:16.057193+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530823/0x15f6000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:17.057330+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067390 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:18.057442+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:19.057663+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:20.057797+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:21.057951+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:22.059515+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
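
_send_mon_message shows the OSD's monitor session endpoint in entity address form, protocol:ip:port/nonce. The v2 prefix and port 3300 identify Ceph's messenger v2 protocol (msgr1 monitors listen on 6789 instead), and the trailing /0 is the connection nonce. A tiny parser, with the regex being this note's own:

    import re

    addr = "v2:192.168.122.100:3300/0"
    proto, ip, port, nonce = re.fullmatch(
        r"(v[12]):([\d.]+):(\d+)/(\d+)", addr).groups()
    print(proto, ip, port, nonce)  # v2 192.168.122.100 3300 0
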
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066672 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:23.060694+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:24.061633+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:25.062843+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:26.063067+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:27.063918+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066672 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:28.065836+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:29.067487+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:30.067842+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:31.068026+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:32.068256+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x1530788/0x15f5000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066672 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:33.068550+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:34.068888+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:35.069139+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:36.069376+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:37.069797+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066672 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:38.070108+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.363595963s of 25.367301941s, submitted: 2
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fba38000/0x0/0x4ffc00000, data 0x15306ed/0x15f4000, compress 0x0/0x0/0x0, omap 0xe9a9, meta 0x2bc1657), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:39.070372+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:40.070632+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:41.070913+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 24494080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:42.071117+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
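
handle_osd_map records an incremental OSDMap delivery: the message carries epochs [127,128], the OSD currently has 127, and the source can serve [1,128]. The OSD therefore applies epoch 128, and the heartbeats below switch from "osd.2 127" to "osd.2 128". A sketch of the epoch bookkeeping the line implies (parsing is this note's own):

    import re

    line = "osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]"
    first, last, have, src_lo, src_hi = map(int, re.search(
        r"epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]",
        line).groups())

    needed = list(range(max(have + 1, first), last + 1))
    print(needed)  # [128] -> subsequent heartbeats log "osd.2 128"
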
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068602 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:43.071326+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:44.071525+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x15322f2/0x15f7000, compress 0x0/0x0/0x0, omap 0xea0d, meta 0x2bc15f3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:45.071776+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:46.071962+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:47.072146+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x15322f2/0x15f7000, compress 0x0/0x0/0x0, omap 0xea0d, meta 0x2bc15f3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068602 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:48.072315+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:49.072470+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.925289154s of 11.006295204s, submitted: 26
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:50.072621+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fba35000/0x0/0x4ffc00000, data 0x15322f2/0x15f7000, compress 0x0/0x0/0x0, omap 0xea0d, meta 0x2bc15f3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 24485888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:51.072771+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _renew_subs
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
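
The _renew_subs / _send_mon_message pair right above is the OSD refreshing its map subscription with the monitor, and the handle_osd_map for epoch 129 that follows is the monitor honoring that subscription. The same renew-then-deliver pattern carries the map stream forward to epoch 136 by the end of this window.
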
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:52.073626+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071376 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:53.073765+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:54.073881+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:55.074096+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1533d71/0x15fa000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:56.074250+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:57.074447+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071376 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:58.074664+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:59.074870+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:00.075131+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 24715264 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:01.075338+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:02.075594+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073068 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:03.075882+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:04.076141+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:05.076309+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:06.076461+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:07.076694+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073068 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:08.076925+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:09.077144+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:10.077394+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:11.077586+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:12.077837+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073068 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:13.078023+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:14.078239+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x1533e0c/0x15fb000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:15.078385+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 24707072 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.620378494s of 25.698316574s, submitted: 15
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:16.078533+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 24698880 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:17.078699+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074040 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:18.079025+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:19.079253+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1533ea7/0x15fc000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:20.079483+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:21.079716+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:22.080200+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fba30000/0x0/0x4ffc00000, data 0x1533ea7/0x15fc000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077246 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:23.080428+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:24.080756+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:25.080952+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:26.081147+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:27.081395+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.283190727s of 12.332059860s, submitted: 24
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fba2c000/0x0/0x4ffc00000, data 0x1535a11/0x15fe000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076656 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:28.081639+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:29.081804+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:30.081997+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:31.082149+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 24682496 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fba2c000/0x0/0x4ffc00000, data 0x1535a11/0x15fe000, compress 0x0/0x0/0x0, omap 0xeabf, meta 0x2bc1541), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:32.082287+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 24666112 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _renew_subs
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba25000/0x0/0x4ffc00000, data 0x153902a/0x1603000, compress 0x0/0x0/0x0, omap 0xebd5, meta 0x2bc142b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082078 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:33.082434+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 24633344 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:34.082595+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 24633344 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 426 B/s wr, 60 op/s
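
The single ceph-mgr line gives the cluster-wide view the per-OSD numbers add up to: 305 placement groups all active+clean, 78 MiB of logical data against 304 MiB raw used (replication plus BlueStore overhead), and 60 GiB total capacity, i.e. the three 20 GiB OSDs from the store_statfs totals. Cross-check:

    per_osd_total = 0x4ffc00000          # total bytes per OSD, from store_statfs
    print(f"{3 * per_osd_total / 1024**3:.0f} GiB")  # 60 GiB, as pgmap reports
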
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:35.082797+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 24633344 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:36.082953+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 24633344 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:37.083160+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 24633344 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.215364456s of 10.467832565s, submitted: 106
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088854 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:38.083307+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba21000/0x0/0x4ffc00000, data 0x153c894/0x1609000, compress 0x0/0x0/0x0, omap 0xec39, meta 0x2bc13c7), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 23568384 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:39.083483+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 23568384 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:40.083686+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 23568384 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba20000/0x0/0x4ffc00000, data 0x153c894/0x1609000, compress 0x0/0x0/0x0, omap 0xec39, meta 0x2bc13c7), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:41.083824+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 23560192 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba20000/0x0/0x4ffc00000, data 0x153c894/0x1609000, compress 0x0/0x0/0x0, omap 0xec39, meta 0x2bc13c7), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:42.084016+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093174 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:43.084215+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:44.084379+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x153ff68/0x160f000, compress 0x0/0x0/0x0, omap 0x111aa, meta 0x2bbee56), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:45.084715+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:46.084893+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x153ff68/0x160f000, compress 0x0/0x0/0x0, omap 0x111aa, meta 0x2bbee56), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:47.085092+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 23535616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094866 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:48.085347+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:49.085545+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1540003/0x1610000, compress 0x0/0x0/0x0, omap 0x111aa, meta 0x2bbee56), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:50.085812+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:51.086027+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _renew_subs
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.778944016s of 13.841563225s, submitted: 39
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:52.086185+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096442 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:53.086402+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:54.086616+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:55.086841+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:56.086984+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:57.087148+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:58.087301+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096442 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:59.087419+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:00.087538+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:01.087676+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 23511040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:02.087861+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:03.088010+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097414 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:04.088162+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:05.088262+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541aa2/0x1613000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:06.088442+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541aa2/0x1613000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:07.088639+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:08.088794+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097414 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.695030212s of 16.724678040s, submitted: 14
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:09.088936+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:10.089072+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:11.089173+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:12.089348+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:13.089500+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096696 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541a07/0x1612000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:14.089701+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:15.089854+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 23478272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:16.089996+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 23470080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541aa2/0x1613000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:17.090097+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 23470080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:18.090237+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097414 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 23470080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:19.090374+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 23470080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:20.090514+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.763314247s of 11.776571274s, submitted: 2
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:21.090713+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:22.090901+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:23.091047+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099106 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541b3d/0x1614000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:24.091212+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541b3d/0x1614000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:25.091387+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:26.091507+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:27.091636+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:28.091803+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099920 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:29.091977+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:30.092203+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba18000/0x0/0x4ffc00000, data 0x1541b3d/0x1614000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:31.092363+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.232583046s of 11.239578247s, submitted: 3
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:32.092831+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:33.093057+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098228 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba19000/0x0/0x4ffc00000, data 0x1541aa2/0x1613000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:34.093272+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:35.093719+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:36.093939+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba14000/0x0/0x4ffc00000, data 0x15436a7/0x1616000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 23437312 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:37.094212+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:38.094616+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101722 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:39.094789+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:40.095117+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:41.095280+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba14000/0x0/0x4ffc00000, data 0x15436a7/0x1616000, compress 0x0/0x0/0x0, omap 0x1132a, meta 0x2bbecd6), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:42.095618+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:43.096135+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104496 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:44.096513+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:45.096650+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:46.096820+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba11000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:47.097006+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.116888046s of 16.173322678s, submitted: 36
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:48.097184+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103906 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:49.097357+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x154508b/0x1618000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:50.097513+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x154508b/0x1618000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 23429120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:51.097658+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 23412736 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:52.097798+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 23412736 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:53.098050+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104878 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba13000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:54.098291+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:55.098447+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:56.098626+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba13000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:57.098782+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba13000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x11424, meta 0x2bbebdc), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:58.098934+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104878 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:59.099196+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 23396352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: handle_auth_request added challenge on 0x5579540e5400
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.094568253s of 12.111476898s, submitted: 2
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: handle_auth_request added challenge on 0x5579540e4800
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:00.099331+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 23085056 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 14
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:01.099482+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:02.099672+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x154523b/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x154523b/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:03.099801+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107264 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:04.099947+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:05.100112+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:06.100235+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:07.100380+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:08.100576+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107264 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:09.100728+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:10.100910+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:11.101075+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 23019520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.934212685s of 11.958757401s, submitted: 4
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:12.101316+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x1545126/0x1619000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:13.101520+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107982 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:14.101715+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:15.101895+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:16.102081+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:17.102260+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:18.102446+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107982 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:19.102610+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:20.102733+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:21.102922+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:22.103151+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:23.103347+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107982 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:24.103640+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 23011328 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.219187737s of 13.221732140s, submitted: 1
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:25.103852+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 22978560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:26.104013+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 22978560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:27.104269+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 22978560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba11000/0x0/0x4ffc00000, data 0x15451c1/0x161a000, compress 0x0/0x0/0x0, omap 0x1159c, meta 0x2bbea64), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 139 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:28.104440+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111332 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:29.104588+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:30.104762+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:31.104914+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:32.105253+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:33.105461+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114106 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba0a000/0x0/0x4ffc00000, data 0x1548845/0x1620000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:34.105618+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:35.105860+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:36.106611+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:37.107104+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:38.107746+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114106 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba0a000/0x0/0x4ffc00000, data 0x1548845/0x1620000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:39.108193+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:40.108989+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:41.110188+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.585231781s of 16.662071228s, submitted: 63
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba0a000/0x0/0x4ffc00000, data 0x1548845/0x1620000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:42.110414+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:43.111196+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113386 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba0c000/0x0/0x4ffc00000, data 0x1548845/0x1620000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:44.111589+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:45.111818+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:46.112039+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:47.112237+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba0c000/0x0/0x4ffc00000, data 0x1548845/0x1620000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 22970368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:48.112605+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116770 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 22937600 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:49.112825+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 22937600 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba09000/0x0/0x4ffc00000, data 0x1548a16/0x1623000, compress 0x0/0x0/0x0, omap 0x11653, meta 0x2bbe9ad), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:50.113377+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 22913024 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:51.113903+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 22904832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:52.114424+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 22904832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:53.114667+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117026 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _renew_subs
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.542133331s of 11.936735153s, submitted: 6
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:54.115250+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:55.115485+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154a3df/0x1622000, compress 0x0/0x0/0x0, omap 0x11709, meta 0x2bbe8f7), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:56.115754+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:57.115993+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:58.116217+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116754 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:59.116422+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154a3df/0x1622000, compress 0x0/0x0/0x0, omap 0x11709, meta 0x2bbe8f7), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:00.116625+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 21880832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:01.116859+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _renew_subs
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 20832256 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:02.117106+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 20832256 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:03.117285+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119528 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:04.117521+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:05.117700+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:06.117926+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:07.118119+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:08.118405+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119528 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:09.119407+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.180488586s of 16.281837463s, submitted: 62
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:10.120896+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 20815872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:11.122166+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 ms_handle_reset con 0x5579540e5400 session 0x557953f29a40
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 ms_handle_reset con 0x5579540e4800 session 0x557953edf880
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 20463616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:12.123411+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 20463616 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:13.123594+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 15
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118218 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:14.124081+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:15.124829+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:16.125041+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:17.125416+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:18.125866+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118218 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:19.126088+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:20.126618+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 20406272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:21.126836+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:22.127190+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:23.127366+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118218 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:24.127653+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:25.127868+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:26.128055+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:27.128401+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:28.128599+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118218 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:29.128845+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:30.129002+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.185983658s of 20.225440979s, submitted: 181
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:31.129137+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:32.129352+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154bf19/0x1626000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:33.129603+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121602 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:34.129759+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 20398080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:35.129965+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:36.130183+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba03000/0x0/0x4ffc00000, data 0x154c0ab/0x1628000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:37.130358+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:38.130653+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123980 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:39.130832+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:40.131100+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.403062820s of 10.532799721s, submitted: 14
Jan 21 14:26:44 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 20389888 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:41.131302+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba05000/0x0/0x4ffc00000, data 0x154c00c/0x1627000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 20373504 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:42.131641+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:43.131840+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116309731' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121092 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:44.132010+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:45.132274+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:46.132488+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:47.132633+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:48.132818+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122784 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:49.132968+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:50.133109+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:51.133383+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 20381696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:52.133675+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.927226067s of 11.937694550s, submitted: 4
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 20348928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:53.133918+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124476 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 20348928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:54.134093+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 20348928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:55.134229+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154bef3/0x1626000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 20348928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:56.134384+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 20283392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:57.134532+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 20283392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:58.134790+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123630 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 20283392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:59.134960+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 20250624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:00.135183+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 20250624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:01.135410+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 20242432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:02.135636+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:03.135826+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123742 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:04.135958+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:05.136060+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:06.136207+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6974 writes, 26K keys, 6974 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6974 writes, 1420 syncs, 4.91 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1254 writes, 2803 keys, 1254 commit groups, 1.0 writes per commit group, ingest: 1.29 MB, 0.00 MB/s
                                           Interval WAL: 1254 writes, 494 syncs, 2.54 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:07.136394+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:08.136662+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123742 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:09.136815+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:10.137006+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.066503525s of 18.086311340s, submitted: 8
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:11.137246+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 20226048 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154beac/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:12.137475+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 20234240 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:13.137674+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 20234240 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc ms_handle_reset ms_handle_reset con 0x5579519be400
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2882926037
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: get_auth_request con 0x5579540bd400 auth_method 0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_configure stats_period=5
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123758 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154beac/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:14.137990+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19849216 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:15.138150+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19849216 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154beaa/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:16.138396+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19849216 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:17.138617+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19849216 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:18.138743+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 19849216 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123758 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:19.138860+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19824640 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154beab/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:20.139064+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19824640 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:21.139255+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 19824640 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.091238976s of 11.337059021s, submitted: 10
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:22.139430+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19816448 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:23.139764+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19816448 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bea9/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123024 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:24.139994+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19816448 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:25.140264+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 19816448 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:26.140511+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 19791872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:27.140698+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 19791872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:28.140879+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 19791872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123758 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:29.141021+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 19791872 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154beac/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:30.141258+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:31.141419+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154beaa/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:32.141628+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:33.141822+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.114414215s of 11.132976532s, submitted: 9
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122050 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:34.142054+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:35.142322+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:36.142635+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:37.142773+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:38.142963+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154be7e/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123742 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:39.143133+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:40.143424+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:41.143692+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:42.143955+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:43.144231+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123024 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:44.144507+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:45.144682+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:46.144827+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 19742720 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bde3/0x1624000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:47.145001+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:48.145129+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123024 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:49.145257+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.400957108s of 16.445089340s, submitted: 4
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:50.145435+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:51.145571+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:52.145747+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba06000/0x0/0x4ffc00000, data 0x154beab/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:53.145900+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123742 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:54.146025+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:55.146153+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:56.146309+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 19734528 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x154bea9/0x1625000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:57.146433+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 17268736 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:58.146639+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 17006592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:59.146773+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137196 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 16990208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.365338326s of 10.434735298s, submitted: 31
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:00.146919+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 16924672 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb9be000/0x0/0x4ffc00000, data 0x1594c4d/0x166e000, compress 0x0/0x0/0x0, omap 0x11889, meta 0x2bbe777), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:01.147045+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 16785408 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:02.147196+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 16646144 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:03.147345+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 16646144 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:04.147462+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135118 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 16318464 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:05.147584+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb987000/0x0/0x4ffc00000, data 0x15ca4f1/0x16a5000, compress 0x0/0x0/0x0, omap 0x13d09, meta 0x2bbc2f7), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 16695296 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:06.147758+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb960000/0x0/0x4ffc00000, data 0x15f1784/0x16cc000, compress 0x0/0x0/0x0, omap 0x13d09, meta 0x2bbc2f7), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 16883712 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:07.147896+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb949000/0x0/0x4ffc00000, data 0x1609201/0x16e3000, compress 0x0/0x0/0x0, omap 0x13d09, meta 0x2bbc2f7), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 14786560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:08.148071+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 14786560 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:09.148283+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139438 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87179264 unmapped: 14540800 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:10.148490+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa746000/0x0/0x4ffc00000, data 0x166b527/0x1746000, compress 0x0/0x0/0x0, omap 0x13d09, meta 0x3d5c2f7), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87236608 unmapped: 14483456 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:11.148625+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _renew_subs
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.369460106s of 11.168671608s, submitted: 185
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87236608 unmapped: 14483456 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:12.148812+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87203840 unmapped: 14516224 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa741000/0x0/0x4ffc00000, data 0x166cfa6/0x1749000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:13.148944+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 13983744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:14.149156+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153996 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 13959168 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:15.149357+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 13631488 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:16.149513+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 13631488 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa6f3000/0x0/0x4ffc00000, data 0x16bbf5b/0x1799000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:17.149654+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 13434880 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:18.149776+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 89341952 unmapped: 12378112 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:19.149920+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161404 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 11984896 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:20.150073+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa683000/0x0/0x4ffc00000, data 0x172c203/0x1809000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 11984896 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa683000/0x0/0x4ffc00000, data 0x172c203/0x1809000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:21.150203+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 11558912 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.060370445s of 10.557484627s, submitted: 64
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:22.150445+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90308608 unmapped: 11411456 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:23.150630+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 89890816 unmapped: 11829248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:24.150776+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163144 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 11558912 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:25.150933+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 11558912 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa639000/0x0/0x4ffc00000, data 0x17746d2/0x1853000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:26.151135+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 89268224 unmapped: 12451840 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 16
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: handle_auth_request added challenge on 0x55795244d000
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:27.151283+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90636288 unmapped: 11083776 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:28.151428+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90636288 unmapped: 11083776 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:29.151667+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170104 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa5c5000/0x0/0x4ffc00000, data 0x17e8156/0x18c7000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 11067392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:30.151855+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90939392 unmapped: 10780672 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:31.151998+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90791936 unmapped: 10928128 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.014771461s of 10.000402451s, submitted: 67
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:32.152250+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90791936 unmapped: 10928128 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:33.152445+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90947584 unmapped: 10772480 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:34.152618+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166478 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90947584 unmapped: 10772480 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:35.152802+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa5a2000/0x0/0x4ffc00000, data 0x180bbf0/0x18ea000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90980352 unmapped: 10739712 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:36.152943+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90980352 unmapped: 10739712 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa543000/0x0/0x4ffc00000, data 0x186b0bf/0x1949000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:37.153072+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 90988544 unmapped: 10731520 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:38.153284+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 9183232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:39.153462+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180522 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 8634368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:40.153691+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 9625600 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa4ed000/0x0/0x4ffc00000, data 0x18c1209/0x199f000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:41.153849+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 9617408 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.556797028s of 10.128489494s, submitted: 52
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:42.154032+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 9453568 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa49a000/0x0/0x4ffc00000, data 0x1913bc6/0x19f2000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:43.154243+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92250112 unmapped: 9469952 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:44.154400+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185870 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92250112 unmapped: 9469952 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:45.154612+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 9142272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:46.154769+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa45e000/0x0/0x4ffc00000, data 0x194f638/0x1a2d000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 9142272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:47.154963+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 9142272 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:48.155165+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 9682944 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:49.155342+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0x1951365/0x1a32000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183232 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92045312 unmapped: 9674752 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0x1951365/0x1a32000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:50.155467+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92045312 unmapped: 9674752 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:51.155624+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0x1951365/0x1a32000, compress 0x0/0x0/0x0, omap 0x13e13, meta 0x3d5c1ed), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:52.155780+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:53.155903+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:54.156038+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185702 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.222537041s of 12.803586960s, submitted: 66
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:55.156163+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:56.156337+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa455000/0x0/0x4ffc00000, data 0x1952ead/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:57.156611+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa455000/0x0/0x4ffc00000, data 0x1952ead/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:58.156743+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92069888 unmapped: 9650176 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:59.156911+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186674 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 9641984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:00.157143+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 9641984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa456000/0x0/0x4ffc00000, data 0x1952eab/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:01.157295+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa456000/0x0/0x4ffc00000, data 0x1952eab/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 9641984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:02.157539+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 9641984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:03.157808+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa456000/0x0/0x4ffc00000, data 0x1952eab/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 9641984 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:04.157999+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186978 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:05.158169+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.812602997s of 11.090450287s, submitted: 13
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:06.158317+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:07.158532+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:08.158867+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa454000/0x0/0x4ffc00000, data 0x1952d87/0x1a35000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:09.159038+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186260 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:10.159214+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:11.159389+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa454000/0x0/0x4ffc00000, data 0x1952d87/0x1a35000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:12.159585+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa454000/0x0/0x4ffc00000, data 0x1952d87/0x1a35000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:13.159744+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 9633792 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:14.159940+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188750 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa455000/0x0/0x4ffc00000, data 0x1952e50/0x1a36000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 9625600 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:15.160148+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 9617408 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:16.160312+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:17.160491+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.992103577s of 12.011561394s, submitted: 9
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:18.160682+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa456000/0x0/0x4ffc00000, data 0x1952d87/0x1a35000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:19.160839+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188974 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:20.160999+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:21.161377+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:22.161609+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 8577024 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:23.161822+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 8577024 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:24.161957+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187778 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa459000/0x0/0x4ffc00000, data 0x1952ce7/0x1a33000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 8577024 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa459000/0x0/0x4ffc00000, data 0x1952ce7/0x1a33000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:25.162107+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:26.162284+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:27.162442+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:28.162674+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0x1952db0/0x1a34000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.971885681s of 11.010825157s, submitted: 15
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:29.162841+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188162 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:30.163051+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:31.163169+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0x1952c4d/0x1a32000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:32.163338+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:33.163538+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:34.163715+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189120 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:35.163848+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:36.163989+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0x1952ce9/0x1a33000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0x1952ce9/0x1a33000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:37.164125+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:38.164264+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:39.164457+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189710 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.989832878s of 11.007907867s, submitted: 9
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:40.164700+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 8568832 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:41.164887+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 ms_handle_reset con 0x55795244d000 session 0x5579542f5c00
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 8257536 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:42.165089+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952c4e/0x1a32000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93462528 unmapped: 8257536 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 17
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:43.165344+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952c4e/0x1a32000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:44.165691+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189104 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:45.165865+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:46.166042+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:47.166269+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952c4c/0x1a32000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [0,2])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:48.166448+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:49.166636+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190078 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:50.166823+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.779275894s of 10.814191818s, submitted: 186
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:51.167017+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952c4d/0x1a32000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:52.167298+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93552640 unmapped: 8167424 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:53.167470+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:54.167755+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189344 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:55.167986+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952b86/0x1a31000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:56.168218+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952b86/0x1a31000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:57.168326+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:58.168475+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952b86/0x1a31000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:59.168672+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189344 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:00.168887+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0x1952b86/0x1a31000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.192089081s of 10.202366829s, submitted: 3
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:01.169059+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93560832 unmapped: 8159232 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:02.169227+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa45b000/0x0/0x4ffc00000, data 0x1952b86/0x1a31000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:03.169406+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:04.169588+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191274 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:05.169752+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:06.169908+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0x19546f0/0x1a33000, compress 0x0/0x0/0x0, omap 0x13f1d, meta 0x3d5c0e3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:07.170057+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93569024 unmapped: 8151040 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:08.170178+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93577216 unmapped: 8142848 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:09.170383+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195036 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93585408 unmapped: 8134656 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:10.170573+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93585408 unmapped: 8134656 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:11.170828+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa454000/0x0/0x4ffc00000, data 0x19563dd/0x1a37000, compress 0x0/0x0/0x0, omap 0x13fa1, meta 0x3d5c05f), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.824344635s of 10.738523483s, submitted: 53
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93609984 unmapped: 8110080 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:12.171043+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:13.171253+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:14.171400+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197794 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:15.171526+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:16.171786+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa452000/0x0/0x4ffc00000, data 0x1957e5a/0x1a3a000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:17.171935+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:18.172093+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa450000/0x0/0x4ffc00000, data 0x1957f23/0x1a3b000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:19.172222+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201050 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:20.172469+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa450000/0x0/0x4ffc00000, data 0x1957ef6/0x1a3b000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:21.172654+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa450000/0x0/0x4ffc00000, data 0x1957ef6/0x1a3b000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:22.172836+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:23.172962+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.247500420s of 12.277046204s, submitted: 21
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 8093696 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:24.173165+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa450000/0x0/0x4ffc00000, data 0x1957ef7/0x1a3b000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200316 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:25.173346+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:26.173743+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:27.174427+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa44f000/0x0/0x4ffc00000, data 0x1957f92/0x1a3c000, compress 0x0/0x0/0x0, omap 0x14027, meta 0x3d5bfd9), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:28.175320+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:29.176289+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203404 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:30.176512+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:31.176888+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa450000/0x0/0x4ffc00000, data 0x1957f90/0x1a3c000, compress 0x0/0x0/0x0, omap 0x14187, meta 0x3d5be79), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 7045120 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:32.177529+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:33.177938+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa44c000/0x0/0x4ffc00000, data 0x1959b98/0x1a3f000, compress 0x0/0x0/0x0, omap 0x1420b, meta 0x3d5bdf5), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:34.178263+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206194 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:35.178409+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:36.178915+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.955485344s of 13.034737587s, submitted: 35
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa44c000/0x0/0x4ffc00000, data 0x1959b98/0x1a3f000, compress 0x0/0x0/0x0, omap 0x1420b, meta 0x3d5bdf5), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:37.179093+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:38.179633+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:39.179954+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa44d000/0x0/0x4ffc00000, data 0x1959b98/0x1a3f000, compress 0x0/0x0/0x0, omap 0x1420b, meta 0x3d5bdf5), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207136 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:40.180071+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 7036928 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:41.180195+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 7028736 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:42.180458+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fa447000/0x0/0x4ffc00000, data 0x195b66c/0x1a42000, compress 0x0/0x0/0x0, omap 0x1424e, meta 0x3d5bdb2), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 7028736 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _renew_subs
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:43.180599+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa444000/0x0/0x4ffc00000, data 0x195d30c/0x1a46000, compress 0x0/0x0/0x0, omap 0x142d2, meta 0x3d5bd2e), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94707712 unmapped: 7012352 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:44.180735+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214952 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 7004160 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:45.180900+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 7004160 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:46.181079+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 7004160 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:47.181278+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.878732681s of 10.947863579s, submitted: 47
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 6987776 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:48.181622+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fa441000/0x0/0x4ffc00000, data 0x195eebc/0x1a49000, compress 0x0/0x0/0x0, omap 0x142d2, meta 0x3d5bd2e), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 6987776 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:49.181868+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217006 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 6987776 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:50.182087+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fa443000/0x0/0x4ffc00000, data 0x195eeba/0x1a49000, compress 0x0/0x0/0x0, omap 0x142d2, meta 0x3d5bd2e), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 6979584 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:51.182292+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 6979584 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:52.182478+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 6979584 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:53.182644+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 154 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6971392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:54.182847+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa440000/0x0/0x4ffc00000, data 0x19607d7/0x1a4a000, compress 0x0/0x0/0x0, omap 0x15ef4, meta 0x3d5a10c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219176 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6971392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:55.183040+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6971392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:56.183204+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 6971392 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:57.183375+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa440000/0x0/0x4ffc00000, data 0x19607d7/0x1a4a000, compress 0x0/0x0/0x0, omap 0x15ef4, meta 0x3d5a10c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5914624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:58.183506+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5914624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:59.183680+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221950 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5914624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:00.183809+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x19623dc/0x1a4d000, compress 0x0/0x0/0x0, omap 0x161d2, meta 0x3d59e2e), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5914624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:01.184004+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 5914624 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:02.184224+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.662895203s of 14.452901840s, submitted: 66
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x19623dc/0x1a4d000, compress 0x0/0x0/0x0, omap 0x161d2, meta 0x3d59e2e), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:03.184343+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:04.184492+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224724 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:05.184629+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1963e5b/0x1a50000, compress 0x0/0x0/0x0, omap 0x164a6, meta 0x3d59b5a), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:06.184760+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:07.184916+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:08.185032+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:09.185184+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226416 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:10.185373+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1963ef6/0x1a51000, compress 0x0/0x0/0x0, omap 0x164a6, meta 0x3d59b5a), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:11.185544+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 5906432 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:12.185810+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.574105263s of 10.472013474s, submitted: 15
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa435000/0x0/0x4ffc00000, data 0x1965b96/0x1a55000, compress 0x0/0x0/0x0, omap 0x16787, meta 0x3d59879), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95821824 unmapped: 5898240 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:13.185971+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95862784 unmapped: 5857280 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:14.186149+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232158 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95870976 unmapped: 5849088 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:15.186303+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95879168 unmapped: 5840896 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:16.186445+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95879168 unmapped: 5840896 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:17.186645+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95879168 unmapped: 5840896 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:18.186802+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa433000/0x0/0x4ffc00000, data 0x19675fa/0x1a55000, compress 0x0/0x0/0x0, omap 0x16a8e, meta 0x3d59572), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:19.186951+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233878 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa432000/0x0/0x4ffc00000, data 0x19690a5/0x1a58000, compress 0x0/0x0/0x0, omap 0x16b14, meta 0x3d594ec), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:20.187144+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa432000/0x0/0x4ffc00000, data 0x19690a5/0x1a58000, compress 0x0/0x0/0x0, omap 0x16b14, meta 0x3d594ec), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:21.187306+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:22.187535+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:23.187812+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.637742043s of 10.431279182s, submitted: 62
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:24.187976+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:25.188205+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:26.188378+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:27.188657+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:28.188842+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:29.189055+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:30.189517+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:31.189812+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:32.190051+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:33.190223+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:34.190623+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:35.190944+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:36.191223+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:37.191470+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:38.191690+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:39.191861+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:40.192384+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:41.192795+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:42.193269+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:43.193543+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:44.194009+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:45.194175+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:46.194390+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:47.194606+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:48.194788+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:49.195023+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:50.195262+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:51.195468+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:52.195620+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:53.195769+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:54.196000+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236652 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:55.196187+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:56.196356+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:57.196520+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:58.196776+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:59.196926+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.334789276s of 36.344741821s, submitted: 12
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238344 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:00.197142+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42e000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:01.197359+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:02.197635+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42e000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:03.197763+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:04.197862+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239316 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:05.198038+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:06.198212+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:07.198347+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:08.198518+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ac7a/0x1a5d000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:09.198698+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240864 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:10.198869+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:11.198983+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:12.199175+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42e000/0x0/0x4ffc00000, data 0x196ad15/0x1a5e000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:13.199328+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:14.199477+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240864 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:15.199633+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:16.199793+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:17.199945+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42e000/0x0/0x4ffc00000, data 0x196ad15/0x1a5e000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:18.200146+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.415119171s of 19.423789978s, submitted: 3
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:19.200340+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240720 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:20.200501+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:21.200669+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42e000/0x0/0x4ffc00000, data 0x196ad15/0x1a5e000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:22.200828+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:23.200927+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:24.201113+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239028 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:25.201331+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ac7a/0x1a5d000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:26.201516+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:27.201660+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:28.201813+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.201059341s of 10.209357262s, submitted: 4
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:29.202051+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:30.202250+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:31.202466+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:32.202765+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:33.202950+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:34.203138+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:35.204421+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:36.204764+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:37.206225+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:38.206881+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:39.207051+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:40.207484+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:41.208168+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:42.209378+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:43.210489+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:44.211462+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:45.212246+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:46.212903+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:47.213524+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:48.213916+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:49.214143+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.711536407s of 20.883802414s, submitted: 1
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:50.214397+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239540 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95887360 unmapped: 5832704 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 18
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: handle_auth_request added challenge on 0x5579540e4400
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196ac3a/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: handle_auth_request added challenge on 0x5579540e4c00
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:51.214650+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95895552 unmapped: 5824512 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:52.214831+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95903744 unmapped: 5816320 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:53.214999+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 19
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:54.215196+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196ad4c/0x1a5d000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:55.215371+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:56.215533+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:57.215695+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:58.215922+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:59.216067+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:00.216207+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95920128 unmapped: 5799936 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:01.216383+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:02.216626+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:03.216779+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:04.216931+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:05.217214+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:06.217360+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:07.217597+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:08.217753+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:09.217884+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:10.218038+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:11.218217+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:12.218451+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:13.218664+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:14.218823+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:15.218958+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:16.219103+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:17.219284+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:18.219413+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:19.219649+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:20.219828+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:21.220040+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:22.220206+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:23.220343+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:24.220519+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:25.220682+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:26.220914+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:27.221157+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:28.221359+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:29.221528+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:30.221810+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:31.222044+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:32.222259+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:33.222394+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:34.222678+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:35.222836+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:36.223005+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:37.223159+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:38.223287+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:39.223484+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:40.223834+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:41.224012+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:42.224242+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95928320 unmapped: 5791744 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:43.224424+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 20
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:44.224584+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:45.224723+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:46.224852+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:47.224991+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:48.225135+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:49.225284+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 21
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:50.225428+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:51.225616+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:52.225837+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:53.225987+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:54.226107+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:55.226480+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:56.226686+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 5783552 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:57.226919+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 67.852340698s of 67.861358643s, submitted: 4
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:58.227105+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:59.227263+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:00.227454+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239540 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:01.227639+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:02.227799+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:03.227961+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:04.228135+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:05.228315+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239540 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:06.228475+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:07.228631+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:08.228880+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:09.229028+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:10.229198+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:11.229340+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 5775360 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.169133186s of 14.178491592s, submitted: 4
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 ms_handle_reset con 0x5579540e4400 session 0x557954300700
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 ms_handle_reset con 0x5579540e4c00 session 0x5579542ecfc0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:12.229603+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96157696 unmapped: 5562368 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:13.229752+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 22
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:14.229869+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:15.230029+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:16.230191+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:17.230325+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:18.230485+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:19.230651+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:20.230808+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239540 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:21.230944+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:22.231146+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:23.231239+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x196ab44/0x1a5b000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:24.231368+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:25.231480+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238822 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.828279495s of 13.854784966s, submitted: 181
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:26.231617+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95969280 unmapped: 5750784 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:27.231769+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:28.231873+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:29.232035+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:30.232115+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240514 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:31.232261+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x196abdf/0x1a5c000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:32.232455+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x196c749/0x1a5e000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:33.232641+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:34.232814+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:35.232995+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242300 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:36.233181+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:37.233318+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:38.233462+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.358694077s of 12.426469803s, submitted: 27
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x196c749/0x1a5e000, compress 0x0/0x0/0x0, omap 0x16cb4, meta 0x3d5934c), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:39.233664+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:40.233841+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:41.233972+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:42.234178+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:43.234362+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:44.234521+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:45.234721+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:46.234910+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:47.235087+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:48.235312+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:49.235541+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:50.235822+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:51.236023+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:52.236265+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:53.236450+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:54.236642+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:55.236843+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:56.237015+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:57.237160+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:58.237339+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:59.237612+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:00.237813+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:01.237979+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:02.238222+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:03.238379+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:04.238631+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:05.238836+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:06.239030+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:07.239231+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:08.239369+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:09.239520+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:10.239679+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:11.239817+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:12.239975+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:13.241325+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:14.241476+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:15.241650+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245074 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:16.241843+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:17.241995+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.632019043s of 39.640956879s, submitted: 54
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:18.242150+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:19.242433+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa428000/0x0/0x4ffc00000, data 0x196e263/0x1a62000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa428000/0x0/0x4ffc00000, data 0x196e263/0x1a62000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:20.242600+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246766 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:21.242833+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa428000/0x0/0x4ffc00000, data 0x196e263/0x1a62000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:22.243009+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:23.243159+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:24.243290+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:25.243421+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245328 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:26.243650+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0x196e1c8/0x1a61000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 163 handle_osd_map epochs [164,164], i have 164, src has [1,164]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:27.243813+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:28.243997+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:29.244132+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:30.244298+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x196fdcd/0x1a64000, compress 0x0/0x0/0x0, omap 0x16dce, meta 0x3d59232), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247848 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:31.244473+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:32.244697+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:33.244888+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:34.245051+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.221067429s of 16.275087357s, submitted: 25
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:35.245246+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:36.245476+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:37.245654+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:38.245790+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:39.245949+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:40.246095+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:41.246250+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:42.246460+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:43.246626+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:44.246807+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:45.246977+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:46.247160+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:47.247349+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:48.247540+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:49.247717+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:50.247860+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:51.248048+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:52.248304+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:53.248515+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:54.248717+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:55.248878+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:56.249081+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:57.249296+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:58.249457+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:59.249654+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:00.249919+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:01.250089+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:02.250327+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:03.250618+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:04.250802+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:05.250956+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:06.251114+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:07.251270+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:08.251434+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:09.251648+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:10.251815+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250622 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:11.252011+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:12.252236+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96034816 unmapped: 5685248 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa423000/0x0/0x4ffc00000, data 0x197184c/0x1a67000, compress 0x0/0x0/0x0, omap 0x16e2d, meta 0x3d591d3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.478794098s of 38.538433075s, submitted: 13
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:13.252352+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:14.258502+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:15.258656+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x1973451/0x1a6a000, compress 0x0/0x0/0x0, omap 0x16ec1, meta 0x3d5913f), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253396 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:16.258787+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:17.258931+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x1973451/0x1a6a000, compress 0x0/0x0/0x0, omap 0x16ec1, meta 0x3d5913f), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:18.259068+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:19.259211+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:20.259362+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa41f000/0x0/0x4ffc00000, data 0x19734ec/0x1a6b000, compress 0x0/0x0/0x0, omap 0x16ec1, meta 0x3d5913f), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255088 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:21.259522+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _renew_subs
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:22.259715+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:23.259867+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.299714088s of 10.686847687s, submitted: 37
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:24.260075+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:25.260237+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257144 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa41c000/0x0/0x4ffc00000, data 0x1974ed0/0x1a6d000, compress 0x0/0x0/0x0, omap 0x16f45, meta 0x3d590bb), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:26.260375+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:27.260517+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:28.260647+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:29.260830+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:30.261113+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa41a000/0x0/0x4ffc00000, data 0x1976ad5/0x1a70000, compress 0x0/0x0/0x0, omap 0x16fd9, meta 0x3d59027), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258944 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:31.261246+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:32.261417+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:33.261620+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:34.261812+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.059199333s of 11.116022110s, submitted: 38
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:35.261984+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 5742592 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261718 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:36.262158+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:37.262375+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:38.262518+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:39.262649+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:40.262776+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261718 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:41.262994+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:42.263215+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:43.263388+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:44.263540+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:45.263773+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261718 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:46.263970+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:47.264137+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:48.264278+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:49.264428+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:50.264617+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261718 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:51.264802+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:52.264999+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:53.265145+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 169 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.993268967s of 19.002677917s, submitted: 13
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x1978554/0x1a73000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:54.265322+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:55.265460+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264492 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:56.265611+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:57.265807+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:58.266064+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 5734400 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fa414000/0x0/0x4ffc00000, data 0x197a159/0x1a76000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:59.266191+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:00.266341+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264492 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:01.266541+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:02.266800+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:03.266907+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:04.267040+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 5726208 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.663864136s of 11.002883911s, submitted: 24
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa414000/0x0/0x4ffc00000, data 0x197a159/0x1a76000, compress 0x0/0x0/0x0, omap 0x1705d, meta 0x3d58fa3), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:05.267192+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:06.267402+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 9322 writes, 33K keys, 9322 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9322 writes, 2294 syncs, 4.06 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2348 writes, 6511 keys, 2348 commit groups, 1.0 writes per commit group, ingest: 6.88 MB, 0.01 MB/s
                                           Interval WAL: 2348 writes, 874 syncs, 2.69 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:07.267617+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:08.267776+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:09.267942+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:10.268102+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:11.268231+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:12.268454+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:13.268620+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:14.268766+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:15.268904+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:16.269072+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:17.269203+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:18.269359+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:19.269501+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:20.269649+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:21.269787+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:22.269952+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:23.270096+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:24.270283+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:25.270401+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:26.270536+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:27.270622+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:28.270811+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:29.270906+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:30.271032+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 5718016 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:31.271117+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:32.271377+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:33.271507+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:34.271691+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:35.271830+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:36.271994+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:37.272188+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:38.272347+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:39.272487+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:40.272636+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:41.272809+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:42.272979+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:43.273127+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:44.273294+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:45.273455+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:46.273658+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:47.273833+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:48.273989+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:49.274144+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:50.274368+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 5709824 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:51.274524+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:52.274760+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:53.274897+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:54.275074+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:55.275224+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:56.275357+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267266 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:57.275653+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:58.275792+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:59.275959+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 5701632 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 55.488452911s of 55.496517181s, submitted: 12
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:00.276106+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 5693440 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:01.276415+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266618 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97083392 unmapped: 4636672 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:02.276647+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 4620288 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:03.276880+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 4620288 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:04.277094+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 4620288 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:05.277247+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:06.277516+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266546 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:07.277663+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:08.277795+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:09.277918+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x197bbd8/0x1a79000, compress 0x0/0x0/0x0, omap 0x17175, meta 0x3d58e8b), peers [0,1] op hist [])
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:10.278042+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97107968 unmapped: 4612096 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:11.278186+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:44 compute-0 ceph-osd[87843]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266546 data_alloc: 218103808 data_used: 5778
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 4562944 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: do_command 'config diff' '{prefix=config diff}'
Jan 21 14:26:44 compute-0 ceph-osd[87843]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 21 14:26:44 compute-0 ceph-osd[87843]: do_command 'config show' '{prefix=config show}'
Jan 21 14:26:44 compute-0 ceph-osd[87843]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 21 14:26:44 compute-0 ceph-osd[87843]: do_command 'counter dump' '{prefix=counter dump}'
Jan 21 14:26:44 compute-0 ceph-osd[87843]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:12.278353+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: do_command 'counter schema' '{prefix=counter schema}'
Jan 21 14:26:44 compute-0 ceph-osd[87843]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 21 14:26:44 compute-0 ceph-osd[87843]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.076944351s of 12.270958900s, submitted: 90
Jan 21 14:26:44 compute-0 ceph-osd[87843]: osd.2 171 ms_handle_reset con 0x5579543f6400 session 0x5579539b9c00
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 4038656 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Got map version 23
Jan 21 14:26:44 compute-0 ceph-osd[87843]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:13.278488+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 3809280 heap: 101720064 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: tick
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_tickets
Jan 21 14:26:44 compute-0 ceph-osd[87843]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:14.278617+0000)
Jan 21 14:26:44 compute-0 ceph-osd[87843]: do_command 'log dump' '{prefix=log dump}'
Jan 21 14:26:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} v 0)
Jan 21 14:26:45 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 14:26:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 21 14:26:45 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009789553' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 21 14:26:45 compute-0 ceph-mon[75031]: from='client.14609 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:45 compute-0 ceph-mon[75031]: pgmap v1424: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 426 B/s wr, 60 op/s
Jan 21 14:26:45 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/4116309731' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 21 14:26:45 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14618 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} v 0)
Jan 21 14:26:45 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 14:26:45 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 21 14:26:45 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510436890' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 21 14:26:46 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14620 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:46 compute-0 ceph-mon[75031]: from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 14:26:46 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3009789553' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 21 14:26:46 compute-0 ceph-mon[75031]: from='client.14618 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:46 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 14:26:46 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3510436890' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 21 14:26:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 21 14:26:46 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097060634' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 21 14:26:46 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 14:26:46 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14624 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:46 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 21 14:26:46 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951054109' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 21 14:26:47 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14628 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:47 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 21 14:26:47 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358713181' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 21 14:26:47 compute-0 ceph-mon[75031]: from='client.14620 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:47 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1097060634' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 21 14:26:47 compute-0 ceph-mon[75031]: pgmap v1425: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 14:26:47 compute-0 ceph-mon[75031]: from='client.14624 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:47 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2951054109' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 21 14:26:47 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14632 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:48 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 21 14:26:48 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2102807065' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 21 14:26:48 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14636 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:48 compute-0 crontab[259457]: (root) LIST (root)
Jan 21 14:26:48 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 14:26:48 compute-0 ceph-mon[75031]: from='client.14628 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:48 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2358713181' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 21 14:26:48 compute-0 ceph-mon[75031]: from='client.14632 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:48 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2102807065' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 21 14:26:48 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14640 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 21 14:26:49 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1482148430' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 21 14:26:49 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14644 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:49 compute-0 ceph-mgr[75322]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 21 14:26:49 compute-0 ceph-2f0e9cad-f0a3-5869-9cc3-8d84d071866a-mgr-compute-0-tnwklj[75318]: 2026-01-21T14:26:49.303+0000 7fc546f36640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:23.744917+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 1556480 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:24.745169+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 1556480 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:25.745376+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 1556480 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:26.745710+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 1548288 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:27.745903+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 1548288 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:28.746137+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 1540096 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:29.746350+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 1540096 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:30.746584+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 1531904 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:31.746757+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 1531904 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:32.746943+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 1531904 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:33.747217+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 1523712 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:34.747545+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 1523712 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:35.747810+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 1515520 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:36.748040+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 1515520 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:37.748245+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 1507328 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:38.748503+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 1507328 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:39.748782+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 1499136 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:40.748968+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1490944 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:41.749218+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 1490944 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:42.749399+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1482752 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:43.749538+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1482752 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:44.749721+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 1482752 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:45.749899+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 1474560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:46.750107+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 1474560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:47.750288+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1466368 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:48.750526+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 1466368 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:49.750750+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1458176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:50.750971+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1458176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:51.751276+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1458176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:52.751507+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 1458176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:53.751719+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1449984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:54.751988+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 1449984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:55.752819+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 1441792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:56.753198+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 1441792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:57.753616+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1433600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:58.754131+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1433600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:59.754394+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 1433600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:00.754655+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 1425408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:01.754923+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 1425408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:02.755168+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1417216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:03.755405+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1417216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:04.755650+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 1417216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:05.755831+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1409024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:06.756145+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 1409024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:07.756404+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 1400832 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:08.756687+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1392640 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:09.756943+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 1392640 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:10.757328+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1376256 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:11.757526+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 1376256 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:12.757815+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 1368064 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:13.757990+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 1368064 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:14.758210+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 1359872 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:15.758438+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 1359872 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:16.759625+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 1359872 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:17.760085+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 1351680 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:18.760323+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:19.760532+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 1351680 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:20.761855+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 1343488 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:21.762002+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 1343488 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:22.762213+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1335296 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:23.762486+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1335296 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:24.762753+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1335296 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:25.763051+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 1335296 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:26.763704+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 1327104 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:27.763843+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 1327104 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:28.764288+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 1318912 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:29.764507+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 1310720 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:30.771247+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 1302528 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:31.771611+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 1302528 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:32.771827+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 1302528 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:33.771947+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 1294336 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:34.772208+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 1294336 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:35.772349+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 1286144 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:36.772546+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 1286144 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:37.772931+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 1277952 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:38.773102+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 1277952 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:39.773305+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 1277952 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:40.773476+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 1269760 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:41.773627+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 1269760 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:42.773755+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 1261568 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:43.774010+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 1261568 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:44.774184+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 1253376 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:45.774307+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 1253376 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:46.774506+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 1245184 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:47.774713+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 1236992 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:48.774832+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 1236992 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:49.774950+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 1228800 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:50.775058+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 1228800 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:51.775175+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 1228800 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:52.775366+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 1220608 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:53.775493+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 1220608 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:54.775674+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 1212416 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:55.775808+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 1212416 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:56.775952+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 1204224 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:57.776077+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 1204224 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:58.776211+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 1204224 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:59.776968+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1196032 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:00.777890+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1196032 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:01.778049+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 1196032 heap: 84926464 old mem: 2845415832 new mem: 2845415832
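
One timing detail before the stats dump: every entry in this burst carries the same journald timestamp (14:26:49), while the expiry stamps inside the _check_auth_rotating messages advance by exactly one second per monclient tick, from 13:54:31.771611 to 13:55:01.778049 across this stretch. That pattern suggests about thirty seconds of once-per-second ticks were flushed to the journal in a single batch rather than logged in real time; this is an inference from the embedded stamps, not something the journal states:

    from datetime import datetime

    # First and last embedded expiry stamps in this stretch, with the
    # "+0000" offset rewritten as "+00:00" for fromisoformat().
    first = datetime.fromisoformat("2026-01-21T13:54:31.771611+00:00")
    last = datetime.fromisoformat("2026-01-21T13:55:01.778049+00:00")
    print((last - first).total_seconds())  # ~30.0 s under one journal stamp
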
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6984 writes, 28K keys, 6984 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6984 writes, 1319 syncs, 5.29 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6984 writes, 28K keys, 6984 commit groups, 1.0 writes per commit group, ingest: 19.80 MB, 0.03 MB/s
                                           Interval WAL: 6984 writes, 1319 syncs, 5.29 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
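
The DB Stats header is internally consistent: 6984 WAL writes over 1319 syncs is the logged 5.29 writes per sync, and 19.80 MB ingested over the 600-second interval is the logged 0.03 MB/s; plain arithmetic over the printed values:

    writes, syncs = 6984, 1319
    ingest_mb, interval_s = 19.80, 600.0
    print(f"{writes / syncs:.2f} writes per sync")   # 5.29, as logged
    print(f"{ingest_mb / interval_s:.2f} MB/s")      # 0.03, as logged

Cumulative and interval figures match throughout because uptime (600.2 s) barely exceeds the 600 s stats interval; this is evidently the first dump since this DB was opened.
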
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
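
One caveat on the block-cache line above: occupancy prints as 18446744073709551615, which is the all-ones 64-bit value (2**64 - 1). That is far more likely a wrapped or sentinel counter in this RocksDB build's stats path than a real occupancy; the usage figure (2.09 KB) and the per-type entry stats are the meaningful numbers. For reference:

    # UINT64_MAX, i.e. an underflowed or sentinel counter, not a real count.
    assert 2**64 - 1 == 18446744073709551615

The same value recurs verbatim in every block-cache line of this dump, which fits a reporting quirk rather than live state.
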
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:02.778195+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 1114112 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:03.778320+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 1114112 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:04.778476+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 1105920 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:05.778637+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 1105920 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:06.778832+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 1097728 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:07.779005+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 1097728 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:08.779150+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1089536 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:09.779298+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1089536 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:10.779444+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1089536 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:11.779620+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1073152 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:12.779768+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 1073152 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:13.779968+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 1064960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:14.780135+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 1064960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:15.780260+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 1064960 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:16.780610+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 1048576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:17.780770+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 1048576 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:18.780950+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1040384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:19.781115+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1040384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:20.781244+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 1040384 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:21.781401+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 1032192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:22.781546+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 1032192 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:23.781694+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 1024000 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:24.781833+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 1024000 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:25.782005+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 1024000 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:26.782192+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:27.782336+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:28.782457+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:29.782651+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 1007616 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:30.782801+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 1007616 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:31.782954+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 999424 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:32.783103+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 999424 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:33.783215+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:34.783393+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:35.783597+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:36.783784+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:37.783921+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:38.784122+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:39.784293+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:40.784421+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:41.784550+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:42.784734+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:43.784889+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:44.785044+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:45.785191+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:46.785375+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:47.785523+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:48.785607+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:49.785801+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:50.785979+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:51.786213+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:52.786374+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:53.786678+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:54.786889+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:55.787300+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:56.787542+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:57.787695+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:58.787902+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:59.788167+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 255.080078125s of 255.664642334s, submitted: 8
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:00.788356+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 1015808 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:01.788687+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:02.788917+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:03.789205+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:04.789394+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:05.789519+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:06.789677+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:07.789812+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:08.789988+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 991232 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:09.790202+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:10.790359+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 983040 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:11.790491+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:12.790637+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 974848 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:13.791533+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:14.791786+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 966656 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:15.791906+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:16.792096+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:17.792256+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 958464 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:18.792397+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:19.792602+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:20.792738+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:21.792896+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 950272 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:22.793093+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:23.793238+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 942080 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:24.793425+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:25.793620+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 933888 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:26.793825+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 925696 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:27.794061+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:28.794231+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:29.794409+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 917504 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:30.794625+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:31.794782+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 909312 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:32.794977+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:33.795124+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 901120 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:34.795308+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 892928 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:35.795474+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 892928 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:36.796036+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 884736 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:37.796190+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:38.796399+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 876544 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:39.796636+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 868352 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:40.796821+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 868352 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:41.797018+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 868352 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:42.797187+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 860160 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:43.797342+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 860160 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:44.797520+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 851968 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:45.797708+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 851968 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:46.797933+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:47.798067+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:48.798255+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 843776 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:49.798392+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 835584 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:50.798545+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 835584 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:51.798687+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 835584 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:52.798829+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:53.798952+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 827392 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:54.799086+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:55.799243+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 819200 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:56.799474+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 811008 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:57.799613+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 811008 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:58.799737+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 811008 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:59.799878+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 802816 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:00.800056+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 802816 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:01.800206+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 802816 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:02.800380+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:03.800542+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:04.800703+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:05.800816+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:06.801017+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:07.801153+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:08.801366+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:09.801621+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:10.801837+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:11.801965+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:12.802151+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:13.802435+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:14.802624+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:15.802869+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:16.803092+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:17.803259+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:18.803401+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:19.803538+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 794624 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:20.803701+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:21.803854+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:22.803998+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:23.804134+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:24.804320+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:25.804540+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:26.804833+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:27.805038+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:28.805189+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:29.805400+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:30.805736+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:31.805876+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:32.806066+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:33.806254+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:34.806386+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:35.806613+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:36.806825+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:37.806962+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:38.807120+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:39.807304+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:40.807512+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:41.807666+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:42.807811+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:43.807936+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:44.808071+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:45.808251+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 786432 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:46.808467+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:47.808640+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:48.808767+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:49.810918+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:50.811070+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:51.811220+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:52.811361+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:53.811497+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:54.811637+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:55.811788+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:56.811944+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:57.812111+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:58.812302+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:59.812546+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:00.812725+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:01.812882+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:02.813016+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:03.813170+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:04.813333+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:05.813467+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:06.813935+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:07.814048+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:08.814336+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:09.814533+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 778240 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:10.814745+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:11.814910+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:12.815112+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:13.815286+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:14.815483+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:15.815671+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:16.815944+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:17.816103+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:18.816250+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 770048 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:19.816390+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:20.816630+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:21.816758+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:22.816895+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:23.817020+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:24.817128+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:25.817244+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:26.817700+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:27.817852+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:28.817991+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:29.818142+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:30.818311+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:31.818465+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:32.818706+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:33.818851+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:34.818987+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:35.819225+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:36.819427+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:37.819639+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 761856 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:38.819823+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:39.820384+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:40.820624+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:41.821080+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:42.821221+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:43.821387+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:44.821615+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:45.821767+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:46.821934+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:47.822119+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:48.822297+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:49.822488+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:50.822699+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:51.822825+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:52.822987+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:53.823160+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:54.823346+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:55.823494+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:56.823785+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:57.823917+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:58.824065+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:59.824221+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:00.824400+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:01.824576+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:02.824767+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 753664 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:03.824950+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:04.825126+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:05.825339+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:06.825507+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:07.825702+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:08.826019+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:09.826233+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:10.826385+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:11.826579+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:12.826703+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:13.826882+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:14.827024+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:15.827163+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:16.827326+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:17.827431+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:18.827589+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:19.827740+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:20.827975+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:21.828125+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:22.828312+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:23.828441+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:24.828622+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:25.828780+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 745472 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:26.828968+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:27.829124+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:28.829253+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:29.829398+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:30.829637+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 737280 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:31.829776+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:32.829938+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:33.830178+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:34.830321+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:35.830633+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:36.830823+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:37.830960+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:38.831103+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:39.831386+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:40.831622+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:41.831830+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:42.832136+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:43.832406+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:44.832649+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:45.832789+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:46.832962+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:47.833145+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:48.833274+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:49.833434+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:50.833623+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:51.833803+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:52.834030+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:53.834519+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:54.834718+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 729088 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:55.834996+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:56.835373+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:57.835523+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:58.835643+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:59.835818+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:00.836096+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:01.836369+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:02.836686+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:03.836967+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:04.837203+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:05.837610+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:06.837884+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:07.838061+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:08.838217+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 720896 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc ms_handle_reset ms_handle_reset con 0x562353502000
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2882926037
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: get_auth_request con 0x562353f25000 auth_method 0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_configure stats_period=5
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:09.838364+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 ms_handle_reset con 0x562353ab1000 session 0x562353b12e00
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x562353fe7000
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:10.838576+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:11.838807+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:12.839006+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:13.839180+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:14.839352+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:15.839473+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:16.839620+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:17.839825+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:18.840000+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:19.840170+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:20.840326+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:21.840500+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:22.840687+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:23.840884+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:24.841155+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:25.841300+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:26.841507+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:27.841639+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:28.841797+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:29.841917+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:30.842045+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:31.842192+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:32.842351+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:33.842481+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:34.842683+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:35.842872+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:36.843027+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:37.843185+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:38.843334+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:39.843494+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:40.843653+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:41.843823+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:42.844120+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:43.844248+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:44.844350+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:45.844502+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:46.844688+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:47.844819+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:48.844936+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:49.845074+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:50.845203+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:51.845323+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:52.845448+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:53.845586+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:54.845745+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:55.845860+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:56.846020+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:57.846179+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:58.846359+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 450560 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 298.569305420s of 299.268676758s, submitted: 90
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:59.846504+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:00.846648+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:01.846812+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:02.846949+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:03.847120+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:04.847239+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:05.847385+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:06.847613+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:07.847772+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:08.847973+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:09.848144+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:10.848299+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:11.848476+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:12.848614+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:13.848738+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:14.848877+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:15.849039+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:16.849267+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:17.849440+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:18.849595+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:19.849713+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:20.849839+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:21.849964+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:22.850143+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:23.850290+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:24.850435+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:25.850589+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:26.850778+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:27.850919+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:28.851062+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:29.851206+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:30.851333+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:31.851493+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:32.851648+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:33.851822+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:34.852014+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:35.852176+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:36.852389+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:37.852594+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:38.852749+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:39.852885+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:40.853030+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:41.853174+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:42.853314+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:43.853463+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:44.853620+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:45.853758+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:46.853917+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:47.854106+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:48.854245+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:49.854404+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:50.854598+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:51.854754+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:52.854930+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:53.855078+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:54.855213+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:55.855332+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:56.855490+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:57.855669+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:58.855865+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:59.855991+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:00.856139+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:01.856278+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:02.856534+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:03.856714+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:04.856877+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:05.856999+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:06.857195+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:07.857340+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:08.857503+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:09.857650+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 434176 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:10.857877+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:11.858034+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:12.858169+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:13.858333+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:14.858486+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:15.858660+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:16.858836+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:17.859057+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:18.859214+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:19.859341+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:20.859493+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:21.859704+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:22.859905+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:23.860068+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:24.860251+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:25.860426+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:26.860642+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:27.860779+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:28.860898+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:29.861027+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:30.861150+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:31.861354+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:32.861446+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:33.861616+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:34.861819+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 425984 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:35.861982+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:36.862124+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:37.862242+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:38.862367+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:39.862529+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:40.862700+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:41.862868+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:42.863092+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:43.863289+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:44.863473+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:45.863615+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:46.863804+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:47.864168+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:48.864305+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:49.864470+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:50.864594+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:51.864733+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:52.864877+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:53.865040+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:54.865177+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:55.865322+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:56.865495+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:57.865613+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:58.865734+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:59.865860+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:00.866043+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:01.866184+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:02.866345+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:03.866515+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:04.866657+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:05.866833+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:06.867037+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 417792 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:07.867207+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 409600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:08.867344+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 409600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:09.867469+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 409600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:10.867636+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 409600 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:11.867786+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:12.867949+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:13.868067+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:14.868243+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:15.868484+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:16.868771+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:17.868916+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:18.869078+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:19.869223+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:20.869495+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:21.869639+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:22.869848+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:23.870024+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:24.870102+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:25.870192+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:26.870346+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:27.870511+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:28.870668+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:29.870843+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:30.871026+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 401408 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:31.871154+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:32.871318+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:33.871472+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:34.871612+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:35.871781+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:36.871976+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:37.872108+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:38.872286+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:39.872458+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:40.872681+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:41.872820+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:42.872970+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:43.873134+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:44.873302+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:45.873444+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:46.873656+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:47.873886+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:48.874063+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:49.874240+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:50.874385+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:51.874677+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:52.874838+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:53.874998+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:54.875144+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:55.875348+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:56.875625+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:57.875778+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:58.875936+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:59.876132+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:00.876269+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:01.876438+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:02.876652+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:03.876876+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:04.877082+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:05.877225+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:06.877366+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:07.877486+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread fragmentation_score=0.000121 took=0.000016s
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:08.877679+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:09.877832+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:10.877983+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:11.878123+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:12.878299+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:13.878445+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:14.878627+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:15.878948+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:16.879134+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:17.879362+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:18.879502+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:19.879634+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:20.879813+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:21.879947+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:22.880164+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:23.880319+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:24.880492+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:25.880674+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:26.880908+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:27.881122+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:28.881294+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:29.881491+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:30.881697+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:31.881829+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:32.881993+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:33.882120+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:34.882288+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:35.882486+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:36.882703+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:37.882866+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:38.882992+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:39.883150+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:40.883337+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:41.883493+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:42.883655+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:43.883835+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:44.883991+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:45.884171+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:46.884350+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:47.884471+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:48.884623+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:49.884814+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:50.884961+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:51.885122+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:52.885278+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:53.885422+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:54.885615+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:55.885819+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:56.886021+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:57.886153+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:58.886284+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:59.886480+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:00.886602+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:01.886827+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 7208 writes, 29K keys, 7208 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7208 writes, 1431 syncs, 5.04 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5623517d38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:02.886987+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:03.887128+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:04.887336+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:05.887528+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:06.887857+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:07.888028+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:08.888154+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:09.888333+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:10.888519+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:11.888651+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:12.888851+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:13.889042+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:14.889245+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:15.889392+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:16.889638+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:17.889790+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:18.889965+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:19.890137+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:20.890264+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:21.890404+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:22.890720+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:23.890872+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:24.891052+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:25.891225+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:26.891509+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:27.891637+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:28.891800+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:29.891993+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:30.892182+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 393216 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:31.892319+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:32.892508+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:33.892694+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:34.892871+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:35.893034+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:36.893211+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:37.893381+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:38.893503+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:39.893613+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:40.893750+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:41.893916+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:42.894088+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:43.894273+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:44.894464+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:45.894609+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:46.894787+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:47.894941+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:48.895102+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:49.895220+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:50.895341+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:51.895535+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:52.895761+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:53.895918+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:54.896072+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:55.896265+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:56.896465+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:57.896687+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:58.896875+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 385024 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:59.897059+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.002990723s of 300.343963623s, submitted: 22
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 368640 heap: 84926464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:00.897189+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1417216 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:01.897339+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1417216 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:02.897502+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1417216 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:03.897655+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1417216 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:04.897798+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1417216 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:05.898164+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1417216 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:06.898368+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1417216 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:07.898540+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1417216 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:08.898797+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1417216 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:09.898971+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1417216 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:10.899146+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:11.899291+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:12.899510+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:13.899675+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:14.899857+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:15.900005+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:16.900223+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:17.900437+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:18.900590+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:19.900709+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:20.900951+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:21.901101+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:22.901248+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:23.901392+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:24.901735+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:25.901935+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:26.902129+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:27.902253+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:28.902392+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:29.902542+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:30.902756+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:31.902967+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:32.903192+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:33.903353+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:34.903509+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:35.903693+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:36.903929+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:37.904105+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:38.904315+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:39.904455+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:40.904649+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:41.904854+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:42.905042+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:43.905232+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:44.905401+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:45.905618+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:46.905847+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:47.906004+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:48.906193+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:49.906403+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:50.906533+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:51.906690+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:52.906830+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:53.907014+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:54.907145+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:55.907325+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:56.907545+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:57.907795+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:58.907982+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:59.908154+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:00.908348+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:01.908494+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:02.908690+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:03.908871+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:04.909038+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:05.909182+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:06.909389+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:07.909526+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:08.909680+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:09.909813+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:10.909989+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:11.910186+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:12.910341+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:13.910486+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:14.910664+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:15.910810+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:16.911003+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:17.911185+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:18.911356+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:19.911521+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:20.911693+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:21.911886+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:22.912031+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:23.912177+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:24.912338+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:25.912528+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:26.912811+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:27.912964+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:28.913105+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:29.913286+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:30.913468+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:31.913624+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:32.913745+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:33.913889+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:34.914017+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:35.914127+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:36.914509+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:37.914689+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:38.914892+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:39.915075+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:40.915255+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:41.915393+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:42.915521+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:43.915723+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:44.915857+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:45.916034+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:46.916214+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:47.916367+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:48.916510+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:49.916710+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:50.916864+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:51.917000+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:52.917144+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:53.917252+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:54.917396+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:55.917544+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:56.917718+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:57.917869+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:58.918019+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:59.918182+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:00.918361+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:01.918619+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:02.918759+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:03.918912+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:04.919092+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:05.919288+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:06.919519+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:07.919698+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:08.919887+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:09.920095+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:10.920402+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:11.920548+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:12.920713+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:13.920847+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:14.920974+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:15.921109+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:16.921287+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:17.921414+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1400832 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:18.921540+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:19.921659+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:20.921776+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:21.921915+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:22.922021+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:23.922165+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:24.922295+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:25.922447+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:26.922682+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:27.922853+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:28.922969+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:29.923093+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:30.923253+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:31.923386+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:32.923524+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:33.923680+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:34.923829+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:35.923957+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:36.924116+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:37.924237+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:38.924370+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:39.924523+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:40.924679+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:41.924820+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:42.924983+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:43.925135+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:44.925272+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1392640 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:45.925446+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:46.925714+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:47.925969+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:48.926123+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:49.926324+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:50.926528+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:51.926717+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:52.926880+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:53.927053+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:54.927231+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:55.927376+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:56.927602+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:57.927799+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:58.927958+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:59.928143+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:00.928306+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:01.928512+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:02.928703+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:03.928934+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:04.929121+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:05.929292+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:06.929617+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:07.929828+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:08.930021+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:09.930164+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 1376256 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:10.930349+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:11.930542+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:12.930768+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:13.930962+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:14.931122+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:15.931402+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:16.931614+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1409024 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:17.931787+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999644 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x562354024400
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1384448 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 198.082351685s of 198.465316772s, submitted: 90
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fce45000/0x0/0x4ffc00000, data 0x1283ab/0x1e7000, compress 0x0/0x0/0x0, omap 0x13441, meta 0x2bbcbbf), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:18.931943+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 1376256 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:19.932136+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 12238848 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 125 ms_handle_reset con 0x562354024400 session 0x562353f58c40
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:20.932281+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 16834560 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:21.932492+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 16834560 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:22.932761+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078789 data_alloc: 218103808 data_used: 12429
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc1c7000/0x0/0x4ffc00000, data 0xd9d735/0xe63000, compress 0x0/0x0/0x0, omap 0x13bb8, meta 0x2bbc448), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 16834560 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:23.932922+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 16834560 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:24.933110+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 16834560 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc1c7000/0x0/0x4ffc00000, data 0xd9d735/0xe63000, compress 0x0/0x0/0x0, omap 0x13bb8, meta 0x2bbc448), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:25.933258+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x562354025800
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 25157632 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:26.933642+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 25100288 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 126 ms_handle_reset con 0x562354025800 session 0x5623558da8c0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:27.933855+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128095 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 25100288 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:28.934024+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 25100288 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:29.934250+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 25100288 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:30.934485+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.915373802s of 12.542260170s, submitted: 57
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9c2000/0x0/0x4ffc00000, data 0x159f320/0x1668000, compress 0x0/0x0/0x0, omap 0x141c8, meta 0x2bbbe38), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 25100288 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:31.934765+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:32.935022+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130629 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:33.935279+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:34.935507+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9bf000/0x0/0x4ffc00000, data 0x15a0d9f/0x166b000, compress 0x0/0x0/0x0, omap 0x144ae, meta 0x2bbbb52), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:35.935708+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:36.935919+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9bf000/0x0/0x4ffc00000, data 0x15a0d9f/0x166b000, compress 0x0/0x0/0x0, omap 0x144ae, meta 0x2bbbb52), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:37.936144+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130629 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9bf000/0x0/0x4ffc00000, data 0x15a0d9f/0x166b000, compress 0x0/0x0/0x0, omap 0x144ae, meta 0x2bbbb52), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9bf000/0x0/0x4ffc00000, data 0x15a0d9f/0x166b000, compress 0x0/0x0/0x0, omap 0x144ae, meta 0x2bbbb52), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:38.936335+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:39.936471+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:40.936663+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:41.936860+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:42.936984+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130629 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9bf000/0x0/0x4ffc00000, data 0x15a0d9f/0x166b000, compress 0x0/0x0/0x0, omap 0x144ae, meta 0x2bbbb52), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:43.937112+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:44.937286+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:45.937463+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:46.938102+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9bf000/0x0/0x4ffc00000, data 0x15a0d9f/0x166b000, compress 0x0/0x0/0x0, omap 0x144ae, meta 0x2bbbb52), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:47.938246+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130629 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:48.938410+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:49.938582+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9bf000/0x0/0x4ffc00000, data 0x15a0d9f/0x166b000, compress 0x0/0x0/0x0, omap 0x144ae, meta 0x2bbbb52), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:50.938718+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 25108480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x562354025c00
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.586994171s of 20.593721390s, submitted: 13
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:51.938842+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 24977408 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:52.938963+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131257 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 12
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23666688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9bc000/0x0/0x4ffc00000, data 0x15a64c2/0x1670000, compress 0x0/0x0/0x0, omap 0x14770, meta 0x2bbb890), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:53.939081+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x5623559f0c00
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 87818240 unmapped: 23339008 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9b0000/0x0/0x4ffc00000, data 0x15b1974/0x167c000, compress 0x0/0x0/0x0, omap 0x14a09, meta 0x2bbb5f7), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:54.939178+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 87818240 unmapped: 23339008 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:55.939326+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9aa000/0x0/0x4ffc00000, data 0x15b764f/0x1682000, compress 0x0/0x0/0x0, omap 0x14cb0, meta 0x2bbb350), peers [0,2] op hist [0,0,0,0,0,0,1,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 22126592 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:56.939524+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 22052864 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:57.939711+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135919 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 22052864 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:58.939894+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 22142976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:59.940105+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 22126592 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9a2000/0x0/0x4ffc00000, data 0x15bf786/0x168a000, compress 0x0/0x0/0x0, omap 0x15330, meta 0x2bbacd0), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 13
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:00.940219+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 22069248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.632277489s of 10.206971169s, submitted: 46
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:01.940384+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89112576 unmapped: 22044672 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:02.940526+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137139 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 21921792 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:03.940670+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 21905408 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:04.940827+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 21905408 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:05.940940+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 21905408 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb990000/0x0/0x4ffc00000, data 0x15d1075/0x169c000, compress 0x0/0x0/0x0, omap 0x15992, meta 0x2bba66e), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:06.941346+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 21856256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:07.941548+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136695 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 21856256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:08.941731+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 21856256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:09.941830+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89341952 unmapped: 21815296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:10.941984+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 21807104 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.286722660s of 10.000367165s, submitted: 38
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:11.942161+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 21807104 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb989000/0x0/0x4ffc00000, data 0x15d84fb/0x16a3000, compress 0x0/0x0/0x0, omap 0x15ccf, meta 0x2bba331), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:12.942385+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138023 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89448448 unmapped: 21708800 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:13.942499+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89448448 unmapped: 21708800 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:14.942622+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89448448 unmapped: 21708800 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:15.942792+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 21856256 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:16.943061+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89325568 unmapped: 21831680 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:17.943259+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138631 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb97b000/0x0/0x4ffc00000, data 0x15e648c/0x16b1000, compress 0x0/0x0/0x0, omap 0x161aa, meta 0x2bb9e56), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 21872640 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:18.943441+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 21872640 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:19.943617+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 21872640 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:20.943769+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 21872640 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:21.944332+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.468436241s of 10.332838058s, submitted: 27
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89292800 unmapped: 21864448 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:22.944978+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138103 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89292800 unmapped: 21864448 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb979000/0x0/0x4ffc00000, data 0x15e836f/0x16b3000, compress 0x0/0x0/0x0, omap 0x162ac, meta 0x2bb9d54), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:23.945289+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89292800 unmapped: 21864448 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:24.945745+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89292800 unmapped: 21864448 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:25.945880+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89333760 unmapped: 21823488 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb974000/0x0/0x4ffc00000, data 0x15ece5e/0x16b8000, compress 0x0/0x0/0x0, omap 0x162ac, meta 0x2bb9d54), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:26.946249+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89333760 unmapped: 21823488 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:27.946655+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139763 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89333760 unmapped: 21823488 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:28.946915+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0x15ee0e5/0x16ba000, compress 0x0/0x0/0x0, omap 0x165e4, meta 0x2bb9a1c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89333760 unmapped: 21823488 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:29.947215+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89333760 unmapped: 21823488 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0x15ee0e5/0x16ba000, compress 0x0/0x0/0x0, omap 0x165e4, meta 0x2bb9a1c), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:30.947346+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89333760 unmapped: 21823488 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:31.947612+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.603292465s of 10.371765137s, submitted: 24
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 21692416 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:32.947836+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141571 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89554944 unmapped: 21602304 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:33.947982+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb966000/0x0/0x4ffc00000, data 0x15fa4df/0x16c6000, compress 0x0/0x0/0x0, omap 0x15b4e, meta 0x2bba4b2), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89554944 unmapped: 21602304 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:34.948208+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89554944 unmapped: 21602304 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb95a000/0x0/0x4ffc00000, data 0x1604739/0x16d2000, compress 0x0/0x0/0x0, omap 0x15c76, meta 0x2bba38a), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:35.948346+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 21553152 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:36.948642+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 89628672 unmapped: 21528576 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:37.948799+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb958000/0x0/0x4ffc00000, data 0x1606484/0x16d4000, compress 0x0/0x0/0x0, omap 0x15c76, meta 0x2bba38a), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150119 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 91766784 unmapped: 19390464 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:38.948959+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 91807744 unmapped: 19349504 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:39.949145+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 91807744 unmapped: 19349504 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:40.949319+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7b2000/0x0/0x4ffc00000, data 0x160bf06/0x16d9000, compress 0x0/0x0/0x0, omap 0x15ec6, meta 0x3d5a13a), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 91807744 unmapped: 19349504 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:41.949476+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 19193856 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:42.949669+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.220898628s of 10.809218407s, submitted: 45
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151423 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 19144704 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:43.949935+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92053504 unmapped: 19103744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:44.950077+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 19070976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:45.950263+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa7a1000/0x0/0x4ffc00000, data 0x161d58b/0x16eb000, compress 0x0/0x0/0x0, omap 0x16360, meta 0x3d59ca0), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92086272 unmapped: 19070976 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:46.950476+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 19030016 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:47.950614+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fa79b000/0x0/0x4ffc00000, data 0x1623618/0x16f1000, compress 0x0/0x0/0x0, omap 0x177e8, meta 0x3d58818), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145857 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92168192 unmapped: 18989056 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:48.950762+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92168192 unmapped: 18989056 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:49.950898+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92168192 unmapped: 18989056 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:50.951058+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92094464 unmapped: 19062784 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:51.951243+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 19030016 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:52.951371+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa786000/0x0/0x4ffc00000, data 0x1634c3d/0x1704000, compress 0x0/0x0/0x0, omap 0x17b85, meta 0x3d5847b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153361 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 19030016 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:53.951640+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 19030016 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:54.951863+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 19030016 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:55.952071+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 19030016 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:56.952381+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.322482109s of 13.838923454s, submitted: 76
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 19030016 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:57.952592+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151461 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 19021824 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:58.952792+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa780000/0x0/0x4ffc00000, data 0x163ce34/0x170c000, compress 0x0/0x0/0x0, omap 0x18139, meta 0x3d57ec7), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 19021824 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:59.952987+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 19021824 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:00.953206+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 19021824 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:01.953374+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 19021824 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:02.953693+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150749 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92160000 unmapped: 18997248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:03.954671+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1642391/0x1711000, compress 0x0/0x0/0x0, omap 0x1825d, meta 0x3d57da3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92160000 unmapped: 18997248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:04.954941+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92160000 unmapped: 18997248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:05.955111+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92160000 unmapped: 18997248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:06.955309+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92225536 unmapped: 18931712 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:07.955472+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151677 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.335715294s of 11.400582314s, submitted: 22
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92225536 unmapped: 18931712 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:08.955613+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92266496 unmapped: 18890752 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:09.955849+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa771000/0x0/0x4ffc00000, data 0x164c6a5/0x171b000, compress 0x0/0x0/0x0, omap 0x18381, meta 0x3d57c7f), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92307456 unmapped: 18849792 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:10.956060+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92307456 unmapped: 18849792 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:11.956227+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa76c000/0x0/0x4ffc00000, data 0x1650eb6/0x1720000, compress 0x0/0x0/0x0, omap 0x18381, meta 0x3d57c7f), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92307456 unmapped: 18849792 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:12.956403+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151357 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92307456 unmapped: 18849792 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:13.956628+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92307456 unmapped: 18849792 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:14.957092+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa757000/0x0/0x4ffc00000, data 0x16649e6/0x1735000, compress 0x0/0x0/0x0, omap 0x18381, meta 0x3d57c7f), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 18587648 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:15.957256+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa757000/0x0/0x4ffc00000, data 0x16649e6/0x1735000, compress 0x0/0x0/0x0, omap 0x185c9, meta 0x3d57a37), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 18546688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:16.957501+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 18546688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa754000/0x0/0x4ffc00000, data 0x16684a6/0x1738000, compress 0x0/0x0/0x0, omap 0x1865b, meta 0x3d579a5), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:17.957662+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156775 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92692480 unmapped: 18464768 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:18.957804+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92692480 unmapped: 18464768 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:19.958067+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92692480 unmapped: 18464768 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:20.958234+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.215084076s of 12.484299660s, submitted: 21
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 18415616 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fa754000/0x0/0x4ffc00000, data 0x16684f5/0x1737000, compress 0x0/0x0/0x0, omap 0x18a69, meta 0x3d57597), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:21.958533+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 18415616 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:22.958793+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154761 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 18415616 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:23.959022+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 18341888 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:24.959358+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 18341888 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:25.959618+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 18341888 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:26.959830+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 18407424 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fa734000/0x0/0x4ffc00000, data 0x1685346/0x1756000, compress 0x0/0x0/0x0, omap 0x19058, meta 0x3d56fa8), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:27.960005+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fa737000/0x0/0x4ffc00000, data 0x1685375/0x1755000, compress 0x0/0x0/0x0, omap 0x19058, meta 0x3d56fa8), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158377 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 18407424 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:28.960133+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 18161664 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:29.960292+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fa722000/0x0/0x4ffc00000, data 0x1697b16/0x176a000, compress 0x0/0x0/0x0, omap 0x19058, meta 0x3d56fa8), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 16932864 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:30.960491+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.362132072s of 10.002154350s, submitted: 62
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fa710000/0x0/0x4ffc00000, data 0x16a7c94/0x177c000, compress 0x0/0x0/0x0, omap 0x19048, meta 0x3d56fb8), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fa70b000/0x0/0x4ffc00000, data 0x16a9713/0x177f000, compress 0x0/0x0/0x0, omap 0x19653, meta 0x3d569ad), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 94068736 unmapped: 17088512 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:31.960686+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95125504 unmapped: 16031744 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:32.960858+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177593 data_alloc: 218103808 data_used: 12448
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 16424960 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:33.961110+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa6e1000/0x0/0x4ffc00000, data 0x16d1f68/0x17a9000, compress 0x0/0x0/0x0, omap 0x19f4e, meta 0x3d560b2), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 16220160 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:34.961372+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 16220160 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:35.961637+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95076352 unmapped: 16080896 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:36.961813+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95076352 unmapped: 16080896 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:37.962070+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180679 data_alloc: 218103808 data_used: 13098
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa6c8000/0x0/0x4ffc00000, data 0x16ed4e8/0x17c2000, compress 0x0/0x0/0x0, omap 0x1a5ef, meta 0x3d55a11), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95313920 unmapped: 15843328 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:38.962270+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 15728640 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:39.962445+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95436800 unmapped: 15720448 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:40.962657+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.159467697s of 10.002428055s, submitted: 164
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95436800 unmapped: 15720448 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:41.962804+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa6a7000/0x0/0x4ffc00000, data 0x170de74/0x17e3000, compress 0x0/0x0/0x0, omap 0x1af05, meta 0x3d550fb), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95535104 unmapped: 15622144 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:42.963000+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184505 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95535104 unmapped: 15622144 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:43.963191+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95535104 unmapped: 15622144 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:44.963365+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 15564800 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:45.963643+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa697000/0x0/0x4ffc00000, data 0x171e003/0x17f3000, compress 0x0/0x0/0x0, omap 0x1b494, meta 0x3d54b6c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 15564800 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:46.963906+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 15548416 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:47.964155+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184721 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 15548416 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:48.964333+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa68d000/0x0/0x4ffc00000, data 0x17294f7/0x17ff000, compress 0x0/0x0/0x0, omap 0x1b78a, meta 0x3d54876), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 15548416 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:49.964520+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 15425536 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:50.964686+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.851194382s of 10.002385139s, submitted: 68
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 14393344 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:51.964892+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 14393344 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:52.965046+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190295 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 14393344 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:53.965238+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 14393344 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa668000/0x0/0x4ffc00000, data 0x174b50c/0x1822000, compress 0x0/0x0/0x0, omap 0x1be4f, meta 0x3d541b1), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:54.965474+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 14393344 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:55.965652+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96903168 unmapped: 14254080 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:56.965871+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96903168 unmapped: 14254080 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:57.966025+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa664000/0x0/0x4ffc00000, data 0x1751073/0x1828000, compress 0x0/0x0/0x0, omap 0x1be4f, meta 0x3d541b1), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187799 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96903168 unmapped: 14254080 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:58.966181+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96903168 unmapped: 14254080 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:59.966307+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa664000/0x0/0x4ffc00000, data 0x1751073/0x1828000, compress 0x0/0x0/0x0, omap 0x1be4f, meta 0x3d541b1), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96903168 unmapped: 14254080 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:00.966477+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.330538750s of 10.458660126s, submitted: 28
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 14245888 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:01.966583+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96911360 unmapped: 14245888 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:02.966739+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190047 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96976896 unmapped: 14180352 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:03.966882+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa655000/0x0/0x4ffc00000, data 0x17608d2/0x1837000, compress 0x0/0x0/0x0, omap 0x1c005, meta 0x3d53ffb), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97009664 unmapped: 14147584 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:04.967072+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97009664 unmapped: 14147584 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:05.967279+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa655000/0x0/0x4ffc00000, data 0x17608d2/0x1837000, compress 0x0/0x0/0x0, omap 0x1c005, meta 0x3d53ffb), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 13934592 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:06.967428+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 13934592 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:07.967793+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191359 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:08.967884+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 13926400 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:09.968037+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97353728 unmapped: 13803520 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa63c000/0x0/0x4ffc00000, data 0x1779927/0x1850000, compress 0x0/0x0/0x0, omap 0x1c3ba, meta 0x3d53c46), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:10.968164+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97353728 unmapped: 13803520 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.541662216s of 10.298975945s, submitted: 19
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:11.968295+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96854016 unmapped: 14303232 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:12.968433+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96854016 unmapped: 14303232 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa632000/0x0/0x4ffc00000, data 0x17832d6/0x185a000, compress 0x0/0x0/0x0, omap 0x1c3ba, meta 0x3d53c46), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190963 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:13.968589+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96854016 unmapped: 14303232 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:14.968721+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 14286848 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa632000/0x0/0x4ffc00000, data 0x17832d6/0x185a000, compress 0x0/0x0/0x0, omap 0x1c3ba, meta 0x3d53c46), peers [0,2] op hist [0,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:15.968839+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96886784 unmapped: 14270464 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa627000/0x0/0x4ffc00000, data 0x178e31c/0x1865000, compress 0x0/0x0/0x0, omap 0x1c570, meta 0x3d53a90), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:16.969002+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96927744 unmapped: 14229504 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:17.969117+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96927744 unmapped: 14229504 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa624000/0x0/0x4ffc00000, data 0x1790e2e/0x1868000, compress 0x0/0x0/0x0, omap 0x1c64b, meta 0x3d539b5), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191559 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:18.969256+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96993280 unmapped: 14163968 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:19.969409+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97001472 unmapped: 14155776 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:20.969669+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97001472 unmapped: 14155776 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa617000/0x0/0x4ffc00000, data 0x179e630/0x1875000, compress 0x0/0x0/0x0, omap 0x1c7b8, meta 0x3d53848), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.810370445s of 10.000174522s, submitted: 21
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:21.969806+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 14475264 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:22.969963+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 14475264 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193471 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:23.970122+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 14475264 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:24.970246+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96813056 unmapped: 14344192 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa5f2000/0x0/0x4ffc00000, data 0x17c3444/0x189a000, compress 0x0/0x0/0x0, omap 0x1c9b7, meta 0x3d53649), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:25.970398+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96854016 unmapped: 14303232 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:26.970604+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96854016 unmapped: 14303232 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:27.970868+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 14213120 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192739 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:28.971016+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97165312 unmapped: 13991936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:29.971193+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97165312 unmapped: 13991936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:30.971785+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97165312 unmapped: 13991936 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa5dc000/0x0/0x4ffc00000, data 0x17d84ec/0x18b0000, compress 0x0/0x0/0x0, omap 0x1cf6b, meta 0x3d53095), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.809724808s of 10.000087738s, submitted: 27
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:31.971927+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 13975552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:32.972224+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97181696 unmapped: 13975552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199469 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:33.972462+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 14057472 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:34.972702+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 14057472 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:35.972986+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 14057472 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa5c9000/0x0/0x4ffc00000, data 0x17e86c1/0x18c1000, compress 0x0/0x0/0x0, omap 0x1d270, meta 0x3d52d90), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:36.973195+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 14057472 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:37.973520+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 14057472 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197349 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:38.973760+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 14057472 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa5c6000/0x0/0x4ffc00000, data 0x17ed4ec/0x18c6000, compress 0x0/0x0/0x0, omap 0x1d270, meta 0x3d52d90), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:39.974058+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 14057472 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:40.974334+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 14057472 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.831305504s of 10.000946045s, submitted: 52
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:41.974672+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 13008896 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:42.974951+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 13008896 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa5bf000/0x0/0x4ffc00000, data 0x17f1038/0x18cb000, compress 0x0/0x0/0x0, omap 0x1d5ad, meta 0x3d52a53), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200699 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:43.975138+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 13008896 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:44.975371+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 13008896 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:45.975527+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 12779520 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa5b0000/0x0/0x4ffc00000, data 0x1802326/0x18dc000, compress 0x0/0x0/0x0, omap 0x1d7ac, meta 0x3d52854), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:46.975901+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 12771328 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:47.976247+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 12763136 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa5ad000/0x0/0x4ffc00000, data 0x180512f/0x18df000, compress 0x0/0x0/0x0, omap 0x1d7c7, meta 0x3d52839), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201715 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:48.976444+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 12673024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:49.976618+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 12673024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa5ad000/0x0/0x4ffc00000, data 0x180512f/0x18df000, compress 0x0/0x0/0x0, omap 0x1d7c7, meta 0x3d52839), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:50.976793+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 12558336 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.325724602s of 10.084680557s, submitted: 15
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:51.976975+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12550144 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:52.977239+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12550144 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa590000/0x0/0x4ffc00000, data 0x1821d55/0x18fc000, compress 0x0/0x0/0x0, omap 0x1db18, meta 0x3d524e8), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204475 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:53.977451+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa590000/0x0/0x4ffc00000, data 0x1821d55/0x18fc000, compress 0x0/0x0/0x0, omap 0x1db18, meta 0x3d524e8), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12550144 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:54.977597+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 12566528 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:55.977851+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 12566528 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:56.978105+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12492800 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:57.978322+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 12787712 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa57a000/0x0/0x4ffc00000, data 0x183775e/0x1912000, compress 0x0/0x0/0x0, omap 0x1dbaa, meta 0x3d52456), peers [0,2] op hist [0,0,0,0,0,0,0,0,1,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204539 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:58.978546+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 12689408 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:59.978770+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12582912 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x5623553bcc00
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 14
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:00.978936+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 12664832 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:01.979101+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.694408417s of 10.133299828s, submitted: 42
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 12664832 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:02.979300+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 12664832 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211847 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:03.979459+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa545000/0x0/0x4ffc00000, data 0x186a0fa/0x1947000, compress 0x0/0x0/0x0, omap 0x1e35d, meta 0x3d51ca3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 12664832 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:04.979602+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98099200 unmapped: 13058048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:05.979744+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98099200 unmapped: 13058048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:06.979963+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0x187a0d7/0x1956000, compress 0x0/0x0/0x0, omap 0x1e34f, meta 0x3d51cb1), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98115584 unmapped: 13041664 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:07.980203+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98115584 unmapped: 13041664 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213071 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:08.980359+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98222080 unmapped: 12935168 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa529000/0x0/0x4ffc00000, data 0x188685c/0x1963000, compress 0x0/0x0/0x0, omap 0x1e34f, meta 0x3d51cb1), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:09.980526+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98254848 unmapped: 12902400 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:10.980660+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98254848 unmapped: 12902400 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:11.980910+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.528689384s of 10.040594101s, submitted: 28
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 12787712 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:12.981242+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 12681216 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:13.981420+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212103 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa514000/0x0/0x4ffc00000, data 0x189b9cb/0x1978000, compress 0x0/0x0/0x0, omap 0x1e95a, meta 0x3d516a6), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 12673024 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:14.981692+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 12656640 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:15.981837+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99663872 unmapped: 11493376 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:16.981999+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 11354112 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:17.982232+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 11337728 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:18.982439+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216063 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99991552 unmapped: 11165696 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4ec000/0x0/0x4ffc00000, data 0x18c3083/0x19a0000, compress 0x0/0x0/0x0, omap 0x1ed0f, meta 0x3d512f1), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:19.982647+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 11132928 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:20.982829+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 11132928 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:21.983003+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.497561455s of 10.002604485s, submitted: 31
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99639296 unmapped: 11517952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:22.983153+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99639296 unmapped: 11517952 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa4c2000/0x0/0x4ffc00000, data 0x18ec006/0x19ca000, compress 0x0/0x0/0x0, omap 0x1f032, meta 0x3d50fce), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:23.983308+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221027 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99647488 unmapped: 11509760 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:24.983444+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 11395072 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:25.983635+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 11395072 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:26.983813+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99893248 unmapped: 11264000 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:27.983948+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100089856 unmapped: 11067392 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa47f000/0x0/0x4ffc00000, data 0x192d989/0x1a0b000, compress 0x0/0x0/0x0, omap 0x1faac, meta 0x3d50554), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:28.984117+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222747 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa46c000/0x0/0x4ffc00000, data 0x19437af/0x1a20000, compress 0x0/0x0/0x0, omap 0x1faac, meta 0x3d50554), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 11157504 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:29.984283+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100106240 unmapped: 11051008 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:30.984430+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100057088 unmapped: 11100160 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:31.984582+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.791056633s of 10.002357483s, submitted: 108
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa43b000/0x0/0x4ffc00000, data 0x1970d39/0x1a4f000, compress 0x0/0x0/0x0, omap 0x201b9, meta 0x3d4fe47), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 11010048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:32.984771+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 11010048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:33.984939+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226505 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 11624448 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:34.985148+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99581952 unmapped: 11575296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:35.990781+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x1970e03/0x1a4f000, compress 0x0/0x0/0x0, omap 0x20401, meta 0x3d4fbff), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 99581952 unmapped: 11575296 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:36.991076+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 10395648 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:37.993194+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 10395648 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:38.994884+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231141 data_alloc: 218103808 data_used: 13899
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 10305536 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:39.996363+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x19a73d3/0x1a85000, compress 0x0/0x0/0x0, omap 0x20848, meta 0x3d4f7b8), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 10289152 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:40.996690+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 10190848 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:41.997855+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.041943550s of 10.005606651s, submitted: 31
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 10190848 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:42.998175+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 10035200 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:43.998627+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237487 data_alloc: 218103808 data_used: 13899
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 101163008 unmapped: 9994240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:44.999221+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0x19eaa85/0x1aca000, compress 0x0/0x0/0x0, omap 0x20f69, meta 0x3d4f097), peers [0,2] op hist [0,0,0,0,1,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9928704 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:46.000011+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 101408768 unmapped: 9748480 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:47.000291+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa3a8000/0x0/0x4ffc00000, data 0x1a057d8/0x1ae4000, compress 0x0/0x0/0x0, omap 0x211b1, meta 0x3d4ee4f), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 9953280 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:48.000545+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 9928704 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:49.000747+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa386000/0x0/0x4ffc00000, data 0x1a27ddb/0x1b06000, compress 0x0/0x0/0x0, omap 0x21367, meta 0x3d4ec99), peers [0,2] op hist [0,3])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248267 data_alloc: 218103808 data_used: 13903
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 9543680 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:50.001317+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 8478720 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa35d000/0x0/0x4ffc00000, data 0x1a4e6ab/0x1b2e000, compress 0x0/0x0/0x0, omap 0x2168a, meta 0x3d4e976), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:51.001521+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 102678528 unmapped: 8478720 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:52.001719+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.922562122s of 10.004051208s, submitted: 73
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 103292928 unmapped: 7864320 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:53.001988+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 102825984 unmapped: 8331264 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:54.002242+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251281 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 102825984 unmapped: 8331264 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:55.002455+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 103063552 unmapped: 8093696 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:56.002716+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa307000/0x0/0x4ffc00000, data 0x1aa655c/0x1b85000, compress 0x0/0x0/0x0, omap 0x21e8d, meta 0x3d4e173), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 103063552 unmapped: 8093696 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:57.003012+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 7995392 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:58.003231+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 8192000 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:59.003402+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255229 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 8192000 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa2be000/0x0/0x4ffc00000, data 0x1aed104/0x1bcd000, compress 0x0/0x0/0x0, omap 0x22274, meta 0x3d4dd8c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:00.003542+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 7962624 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:01.003767+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 102801408 unmapped: 8355840 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa276000/0x0/0x4ffc00000, data 0x1b30142/0x1c12000, compress 0x0/0x0/0x0, omap 0x22920, meta 0x3d4d6e0), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:02.004018+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.581274033s of 10.003578186s, submitted: 145
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa276000/0x0/0x4ffc00000, data 0x1b30142/0x1c12000, compress 0x0/0x0/0x0, omap 0x22920, meta 0x3d4d6e0), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 102801408 unmapped: 8355840 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:03.004240+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 102801408 unmapped: 8355840 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:04.004458+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260297 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 102850560 unmapped: 8306688 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:05.004830+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 103948288 unmapped: 7208960 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa25b000/0x0/0x4ffc00000, data 0x1b4eb93/0x1c30000, compress 0x0/0x0/0x0, omap 0x22845, meta 0x3d4d7bb), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:06.004996+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa25b000/0x0/0x4ffc00000, data 0x1b4eb93/0x1c30000, compress 0x0/0x0/0x0, omap 0x22845, meta 0x3d4d7bb), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 103948288 unmapped: 7208960 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:07.005174+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 7118848 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:08.005294+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa22b000/0x0/0x4ffc00000, data 0x1b7f974/0x1c61000, compress 0x0/0x0/0x0, omap 0x22965, meta 0x3d4d69b), peers [0,2] op hist [0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 7110656 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:09.005742+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa22a000/0x0/0x4ffc00000, data 0x1b8210d/0x1c62000, compress 0x0/0x0/0x0, omap 0x22a85, meta 0x3d4d57b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262667 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104185856 unmapped: 6971392 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:10.006251+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104513536 unmapped: 6643712 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:11.006544+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 ms_handle_reset con 0x5623553bcc00 session 0x5623558da380
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 6373376 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:12.006640+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.215323448s of 10.002745628s, submitted: 258
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 6373376 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa1f3000/0x0/0x4ffc00000, data 0x1bb997d/0x1c99000, compress 0x0/0x0/0x0, omap 0x22e75, meta 0x3d4d18b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:13.006752+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 15
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 6365184 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:14.007664+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259487 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 6365184 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:15.008082+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 6365184 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:16.008507+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 6365184 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:17.009101+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa1f3000/0x0/0x4ffc00000, data 0x1bb99e2/0x1c99000, compress 0x0/0x0/0x0, omap 0x22fdd, meta 0x3d4d023), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104841216 unmapped: 6316032 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:18.009695+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa1df000/0x0/0x4ffc00000, data 0x1bcd445/0x1cad000, compress 0x0/0x0/0x0, omap 0x230fd, meta 0x3d4cf03), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104841216 unmapped: 6316032 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:19.010081+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262115 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104923136 unmapped: 6234112 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:20.010325+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104923136 unmapped: 6234112 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:21.010742+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 104923136 unmapped: 6234112 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa1df000/0x0/0x4ffc00000, data 0x1bcd445/0x1cad000, compress 0x0/0x0/0x0, omap 0x230fd, meta 0x3d4cf03), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:22.011108+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.594175339s of 10.611527443s, submitted: 11
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 6094848 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:23.011390+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa1d0000/0x0/0x4ffc00000, data 0x1bdcc3d/0x1cbc000, compress 0x0/0x0/0x0, omap 0x230fd, meta 0x3d4cf03), peers [0,2] op hist [0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105070592 unmapped: 6086656 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:24.011598+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262171 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105070592 unmapped: 6086656 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:25.011860+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105070592 unmapped: 6086656 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:26.012152+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105070592 unmapped: 6086656 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:27.012498+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105070592 unmapped: 6086656 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa1c9000/0x0/0x4ffc00000, data 0x1be364c/0x1cc3000, compress 0x0/0x0/0x0, omap 0x232ad, meta 0x3d4cd53), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:28.012758+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105103360 unmapped: 6053888 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:29.013067+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262179 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105103360 unmapped: 6053888 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:30.013257+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa1c9000/0x0/0x4ffc00000, data 0x1be364c/0x1cc3000, compress 0x0/0x0/0x0, omap 0x2345d, meta 0x3d4cba3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105111552 unmapped: 6045696 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:31.013548+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa1a2000/0x0/0x4ffc00000, data 0x1c09b7a/0x1cea000, compress 0x0/0x0/0x0, omap 0x235c5, meta 0x3d4ca3b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 5996544 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:32.013899+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 4907008 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa180000/0x0/0x4ffc00000, data 0x1c2bb99/0x1d0c000, compress 0x0/0x0/0x0, omap 0x23c84, meta 0x3d4c37c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:33.014158+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 4907008 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.851778984s of 11.339630127s, submitted: 34
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:34.014278+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269343 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 5873664 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:35.014465+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 6111232 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:36.014641+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105086976 unmapped: 6070272 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa12e000/0x0/0x4ffc00000, data 0x1c7c89d/0x1d5e000, compress 0x0/0x0/0x0, omap 0x24039, meta 0x3d4bfc7), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:37.014937+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105431040 unmapped: 5726208 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:38.015090+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105431040 unmapped: 5726208 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:39.015244+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279799 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 105816064 unmapped: 5341184 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:40.015524+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa0cc000/0x0/0x4ffc00000, data 0x1cdedf0/0x1dc0000, compress 0x0/0x0/0x0, omap 0x24238, meta 0x3d4bdc8), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 4284416 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa0c9000/0x0/0x4ffc00000, data 0x1ce165b/0x1dc3000, compress 0x0/0x0/0x0, omap 0x24345, meta 0x3d4bcbb), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:41.015739+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 4046848 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:42.015924+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x1cfafbd/0x1ddb000, compress 0x0/0x0/0x0, omap 0x2441d, meta 0x3d4bbe3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 4038656 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:43.016103+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa0b2000/0x0/0x4ffc00000, data 0x1cfafec/0x1dda000, compress 0x0/0x0/0x0, omap 0x2453d, meta 0x3d4bac3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 3850240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:44.016316+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1276999 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 3850240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:45.016645+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 3850240 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:46.016804+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.080474854s of 12.772921562s, submitted: 76
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106397696 unmapped: 4759552 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:47.017014+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106127360 unmapped: 5029888 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa0a2000/0x0/0x4ffc00000, data 0x1d0b033/0x1dea000, compress 0x0/0x0/0x0, omap 0x24615, meta 0x3d4b9eb), peers [0,2] op hist [0,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:48.017195+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 5013504 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa09f000/0x0/0x4ffc00000, data 0x1d0da94/0x1ded000, compress 0x0/0x0/0x0, omap 0x2477d, meta 0x3d4b883), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:49.017356+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1276391 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 4931584 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:50.017546+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 4931584 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:51.017866+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa09f000/0x0/0x4ffc00000, data 0x1d0da94/0x1ded000, compress 0x0/0x0/0x0, omap 0x2477d, meta 0x3d4b883), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa09f000/0x0/0x4ffc00000, data 0x1d0da94/0x1ded000, compress 0x0/0x0/0x0, omap 0x2477d, meta 0x3d4b883), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 4890624 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:52.018084+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 4866048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:53.018286+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 4866048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:54.018451+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280499 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa07e000/0x0/0x4ffc00000, data 0x1d2ce51/0x1e0d000, compress 0x0/0x0/0x0, omap 0x248e5, meta 0x3d4b71b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 4866048 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:55.018658+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106348544 unmapped: 4808704 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:56.018836+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa07e000/0x0/0x4ffc00000, data 0x1d2ce51/0x1e0d000, compress 0x0/0x0/0x0, omap 0x248e5, meta 0x3d4b71b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.509513855s of 10.001594543s, submitted: 37
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106774528 unmapped: 4382720 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:57.019020+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106774528 unmapped: 4382720 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:58.019173+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106332160 unmapped: 4825088 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:59.019364+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280413 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 4661248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:00.019534+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 4661248 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:01.019744+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa02b000/0x0/0x4ffc00000, data 0x1d81e27/0x1e61000, compress 0x0/0x0/0x0, omap 0x24df5, meta 0x3d4b20b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 4587520 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:02.019930+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2829 syncs, 3.68 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3207 writes, 10K keys, 3207 commit groups, 1.0 writes per commit group, ingest: 13.35 MB, 0.02 MB/s
                                           Interval WAL: 3207 writes, 1398 syncs, 2.29 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 4587520 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:03.020136+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 3522560 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:04.020371+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287265 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 3309568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:05.020652+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 3309568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:06.020830+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.617146492s of 10.000217438s, submitted: 33
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 3309568 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x1daa79b/0x1e8b000, compress 0x0/0x0/0x0, omap 0x24f5d, meta 0x3d4b0a3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:07.021042+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 3031040 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:08.021283+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 3031040 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:09.021509+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 ms_handle_reset con 0x5623529fb000 session 0x5623517f9180
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x5623553bc400
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc ms_handle_reset ms_handle_reset con 0x562353f25000
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2882926037
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: get_auth_request con 0x5623566f5c00 auth_method 0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_configure stats_period=5
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296653 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 ms_handle_reset con 0x562353ab0400 session 0x56235320d6c0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x562353ab0c00
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 2793472 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 ms_handle_reset con 0x562353fe7000 session 0x56235562f340
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x562353ab0400
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:10.021697+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9fad000/0x0/0x4ffc00000, data 0x1dfde07/0x1ede000, compress 0x0/0x0/0x0, omap 0x253dd, meta 0x3d4ac23), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 3661824 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:11.021842+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9f8f000/0x0/0x4ffc00000, data 0x1e1a5d0/0x1efc000, compress 0x0/0x0/0x0, omap 0x2546d, meta 0x3d4ab93), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 3653632 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:12.022048+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 3653632 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:13.022277+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 3653632 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:14.022484+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303679 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 107659264 unmapped: 3497984 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:15.022642+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x1e3f021/0x1f21000, compress 0x0/0x0/0x0, omap 0x25665, meta 0x3d4a99b), peers [0,2] op hist [0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 2211840 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:16.022960+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.543613434s of 10.001773834s, submitted: 86
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 1966080 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:17.023150+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 1966080 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:18.023448+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 109510656 unmapped: 1646592 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:19.023872+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313377 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 1212416 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:20.024049+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9ecb000/0x0/0x4ffc00000, data 0x1edd48f/0x1fbf000, compress 0x0/0x0/0x0, omap 0x25a55, meta 0x3d4a5ab), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 109674496 unmapped: 1482752 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:21.024192+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 109674496 unmapped: 1482752 heap: 111157248 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:22.024327+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x1f02bdb/0x1fe6000, compress 0x0/0x0/0x0, omap 0x25bbd, meta 0x3d4a443), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 1400832 heap: 112205824 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:23.024675+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 1146880 heap: 112205824 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:24.024954+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9e64000/0x0/0x4ffc00000, data 0x1f43b33/0x2027000, compress 0x0/0x0/0x0, omap 0x26104, meta 0x3d49efc), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316349 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 111075328 unmapped: 1130496 heap: 112205824 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:25.025221+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 111558656 unmapped: 647168 heap: 112205824 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:26.025410+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.560068130s of 10.000738144s, submitted: 115
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 1613824 heap: 113254400 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:27.025590+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 1613824 heap: 113254400 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:28.025728+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:29.025920+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 1589248 heap: 113254400 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9def000/0x0/0x4ffc00000, data 0x1fbb1d2/0x209d000, compress 0x0/0x0/0x0, omap 0x268b7, meta 0x3d49749), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321983 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:30.026156+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 532480 heap: 113254400 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:31.026419+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 442368 heap: 113254400 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:32.026660+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 111583232 unmapped: 2719744 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:33.026861+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 111583232 unmapped: 2719744 heap: 114302976 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9d95000/0x0/0x4ffc00000, data 0x201580e/0x20f7000, compress 0x0/0x0/0x0, omap 0x268f5, meta 0x3d4970b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:34.027032+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 3719168 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323113 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:35.027228+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 3350528 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:36.027361+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 3350528 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9d3d000/0x0/0x4ffc00000, data 0x206cd5c/0x214f000, compress 0x0/0x0/0x0, omap 0x26c55, meta 0x3d493ab), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.311173916s of 10.000031471s, submitted: 90
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:37.027623+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 3342336 heap: 115351552 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:38.027788+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 3342336 heap: 116400128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:39.027979+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 3334144 heap: 116400128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333209 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:40.028159+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 113147904 unmapped: 3252224 heap: 116400128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9cce000/0x0/0x4ffc00000, data 0x20d9ae6/0x21ba000, compress 0x0/0x0/0x0, omap 0x27045, meta 0x3d48fbb), peers [0,2] op hist [0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:41.028328+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 2187264 heap: 116400128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:42.028471+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114319360 unmapped: 2080768 heap: 116400128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:43.028637+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 1990656 heap: 116400128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:44.028793+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 1728512 heap: 116400128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x2119993/0x21f9000, compress 0x0/0x0/0x0, omap 0x2750d, meta 0x3d48af3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331943 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:45.028948+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 1712128 heap: 116400128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:46.029119+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 1712128 heap: 116400128 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.985103607s of 10.003266335s, submitted: 80
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:47.029316+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 3309568 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:48.029484+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 3309568 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9c5b000/0x0/0x4ffc00000, data 0x21518dd/0x2231000, compress 0x0/0x0/0x0, omap 0x277dd, meta 0x3d48823), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:49.029654+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 3301376 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337155 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:50.029805+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 2072576 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f8a89000/0x0/0x4ffc00000, data 0x2183710/0x2263000, compress 0x0/0x0/0x0, omap 0x2786d, meta 0x4ee8793), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:51.029949+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 2072576 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 ms_handle_reset con 0x562354091c00 session 0x562353f58380
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x562354024400
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:52.030099+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 2072576 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:53.030410+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 115572736 unmapped: 1875968 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:54.030649+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 2048000 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f8a5e000/0x0/0x4ffc00000, data 0x21aee48/0x228e000, compress 0x0/0x0/0x0, omap 0x27bcd, meta 0x4ee8433), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336823 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:55.030900+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 2039808 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:56.031086+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 3145728 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.847293854s of 10.000188828s, submitted: 46
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:57.031261+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114319360 unmapped: 3129344 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:58.031390+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114319360 unmapped: 3129344 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f8a4b000/0x0/0x4ffc00000, data 0x21bf0dc/0x22a0000, compress 0x0/0x0/0x0, omap 0x27f2d, meta 0x4ee80d3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:59.031550+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 3055616 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336919 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:00.031771+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 3039232 heap: 117448704 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:01.031946+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 4005888 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:02.032091+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 3997696 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8a47000/0x0/0x4ffc00000, data 0x21c0dab/0x22a3000, compress 0x0/0x0/0x0, omap 0x28752, meta 0x4ee78ae), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:03.032251+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 3989504 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:04.032378+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 3989504 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340397 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:05.032615+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 3989504 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8a47000/0x0/0x4ffc00000, data 0x21c0dab/0x22a3000, compress 0x0/0x0/0x0, omap 0x2894a, meta 0x4ee76b6), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:06.032794+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 3989504 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:07.032950+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.881115913s of 10.298089027s, submitted: 160
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 3989504 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:08.033078+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 3981312 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:09.033244+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 3981312 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340397 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:10.033419+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 3981312 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:11.033638+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f8a48000/0x0/0x4ffc00000, data 0x21c0f08/0x22a3000, compress 0x0/0x0/0x0, omap 0x29202, meta 0x4ee6dfe), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:12.033774+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114540544 unmapped: 3956736 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:13.033911+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114540544 unmapped: 3956736 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:14.034112+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114540544 unmapped: 3956736 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342277 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:15.034289+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 3948544 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8a47000/0x0/0x4ffc00000, data 0x21c29ee/0x22a5000, compress 0x0/0x0/0x0, omap 0x29a06, meta 0x4ee65fa), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:16.034463+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 3948544 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:17.034758+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 3948544 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:18.034957+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.733421326s of 11.002865791s, submitted: 38
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 3948544 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:19.035140+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 3948544 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343969 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:20.035354+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 3948544 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8a45000/0x0/0x4ffc00000, data 0x21c2ab7/0x22a6000, compress 0x0/0x0/0x0, omap 0x29f16, meta 0x4ee60ea), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:21.035519+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114556928 unmapped: 3940352 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:22.035656+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114556928 unmapped: 3940352 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8a45000/0x0/0x4ffc00000, data 0x21c2ab7/0x22a6000, compress 0x0/0x0/0x0, omap 0x29f16, meta 0x4ee60ea), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:23.035882+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114556928 unmapped: 3940352 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:24.036129+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114556928 unmapped: 3940352 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8a46000/0x0/0x4ffc00000, data 0x21c2ab5/0x22a6000, compress 0x0/0x0/0x0, omap 0x2a22e, meta 0x4ee5dd2), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343235 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:25.036278+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114565120 unmapped: 3932160 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:26.036461+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114565120 unmapped: 3932160 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 16
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x562354024800
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:27.036673+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:28.036793+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8a44000/0x0/0x4ffc00000, data 0x21c2c23/0x22a7000, compress 0x0/0x0/0x0, omap 0x2a666, meta 0x4ee599a), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.321982384s of 10.679874420s, submitted: 18
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:29.036974+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345373 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:30.037107+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:31.037296+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:32.037472+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8a42000/0x0/0x4ffc00000, data 0x21c2ef4/0x22a9000, compress 0x0/0x0/0x0, omap 0x2a936, meta 0x4ee56ca), peers [0,2] op hist [0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:33.037645+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:34.037824+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348597 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:35.037992+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:36.038148+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8a43000/0x0/0x4ffc00000, data 0x21c2f59/0x22a9000, compress 0x0/0x0/0x0, omap 0x2aae6, meta 0x4ee551a), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:37.038408+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114524160 unmapped: 3973120 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:38.038612+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114532352 unmapped: 3964928 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.903714180s of 10.008358955s, submitted: 22
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:39.038776+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114540544 unmapped: 3956736 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347465 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:40.038941+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114540544 unmapped: 3956736 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8a43000/0x0/0x4ffc00000, data 0x21c2e4e/0x22a7000, compress 0x0/0x0/0x0, omap 0x2b0ce, meta 0x4ee4f32), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:41.039058+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 3948544 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:42.039221+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 3948544 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8a43000/0x0/0x4ffc00000, data 0x21c2f7c/0x22a8000, compress 0x0/0x0/0x0, omap 0x2b3e6, meta 0x4ee4c1a), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:43.039404+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 3948544 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:44.039618+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 2891776 heap: 118497280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f78a3000/0x0/0x4ffc00000, data 0x21c30df/0x22a9000, compress 0x0/0x0/0x0, omap 0x2b866, meta 0x608479a), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349971 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:45.039787+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 2891776 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:46.039919+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 2891776 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:47.040106+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 2891776 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f78a3000/0x0/0x4ffc00000, data 0x21c320e/0x22a9000, compress 0x0/0x0/0x0, omap 0x2be06, meta 0x60841fa), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:48.040303+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 2957312 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:49.040464+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.675122261s of 10.319170952s, submitted: 58
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 2957312 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353801 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:50.040590+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 2957312 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:51.040728+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 2957312 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:52.041009+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:53.041133+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f789e000/0x0/0x4ffc00000, data 0x21c688e/0x22ac000, compress 0x0/0x0/0x0, omap 0x2ca65, meta 0x608359b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f789e000/0x0/0x4ffc00000, data 0x21c688e/0x22ac000, compress 0x0/0x0/0x0, omap 0x2ca65, meta 0x608359b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:54.041275+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356797 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:55.041451+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:56.041588+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:57.041809+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f789f000/0x0/0x4ffc00000, data 0x21c698e/0x22ad000, compress 0x0/0x0/0x0, omap 0x2ce55, meta 0x60831ab), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:58.041962+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:59.042103+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:00.042254+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356797 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.284742355s of 11.001876831s, submitted: 25
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:01.042426+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:02.042607+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:03.042781+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f789f000/0x0/0x4ffc00000, data 0x21c698e/0x22ad000, compress 0x0/0x0/0x0, omap 0x2d005, meta 0x6082ffb), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2940928 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f789f000/0x0/0x4ffc00000, data 0x21c698e/0x22ad000, compress 0x0/0x0/0x0, omap 0x2d095, meta 0x6082f6b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:04.042942+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 2932736 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f789f000/0x0/0x4ffc00000, data 0x21c69f3/0x22ad000, compress 0x0/0x0/0x0, omap 0x2d365, meta 0x6082c9b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:05.043092+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356797 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 2932736 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:06.043301+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 2932736 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f789f000/0x0/0x4ffc00000, data 0x21c6a58/0x22ad000, compress 0x0/0x0/0x0, omap 0x2d43d, meta 0x6082bc3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:07.043601+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 2932736 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f789f000/0x0/0x4ffc00000, data 0x21c6a58/0x22ad000, compress 0x0/0x0/0x0, omap 0x2d5ed, meta 0x6082a13), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:08.043773+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 2932736 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:09.043948+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 2932736 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:10.044131+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356653 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.822517395s of 10.003097534s, submitted: 12
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 2924544 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:11.044324+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 2924544 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:12.044488+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f78a0000/0x0/0x4ffc00000, data 0x21c6aec/0x22ac000, compress 0x0/0x0/0x0, omap 0x2d9dd, meta 0x6082623), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 2924544 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f78a0000/0x0/0x4ffc00000, data 0x21c6aec/0x22ac000, compress 0x0/0x0/0x0, omap 0x2dafd, meta 0x6082503), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:13.044640+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 2924544 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:14.044794+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 2916352 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:15.045128+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355489 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 2908160 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:16.045284+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 2908160 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f78a1000/0x0/0x4ffc00000, data 0x21c6b86/0x22ab000, compress 0x0/0x0/0x0, omap 0x2e205, meta 0x6081dfb), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:17.045476+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 2908160 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:18.045614+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 2899968 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f78a1000/0x0/0x4ffc00000, data 0x21c6cb5/0x22ab000, compress 0x0/0x0/0x0, omap 0x2e565, meta 0x6081a9b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:19.045789+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 2899968 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:20.045980+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.948834419s of 10.002258301s, submitted: 22
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355665 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 2899968 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:21.046127+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 2899968 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:22.046327+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 2899968 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:23.046434+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 2899968 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f78a0000/0x0/0x4ffc00000, data 0x21c6ee4/0x22ac000, compress 0x0/0x0/0x0, omap 0x2ee1d, meta 0x60811e3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:24.046552+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 2899968 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:25.046715+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357277 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 2891776 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f789f000/0x0/0x4ffc00000, data 0x21c6fae/0x22ad000, compress 0x0/0x0/0x0, omap 0x2eead, meta 0x6081153), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:26.046833+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 2891776 heap: 119545856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:27.047018+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 3506176 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:28.047177+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 3497984 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:29.047356+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 3448832 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:30.047487+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.328453064s of 10.002122879s, submitted: 42
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364033 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 3440640 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f780f000/0x0/0x4ffc00000, data 0x2257e41/0x233d000, compress 0x0/0x0/0x0, omap 0x2f56d, meta 0x6080a93), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:31.047616+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 3334144 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:32.047736+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 117784576 unmapped: 2809856 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:33.047857+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 117784576 unmapped: 2809856 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:34.047989+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 117784576 unmapped: 2809856 heap: 120594432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f77f2000/0x0/0x4ffc00000, data 0x2274de3/0x235a000, compress 0x0/0x0/0x0, omap 0x2f9a5, meta 0x608065b), peers [0,2] op hist [0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:35.048107+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370219 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 3735552 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:36.048221+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 3801088 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:37.048373+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 117841920 unmapped: 3801088 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:38.048504+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 3629056 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:39.048694+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 3629056 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f7786000/0x0/0x4ffc00000, data 0x22e0aba/0x23c6000, compress 0x0/0x0/0x0, omap 0x2fe6d, meta 0x6080193), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:40.048963+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.420964718s of 10.002225876s, submitted: 53
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371371 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 3506176 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:41.049113+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 ms_handle_reset con 0x562354024800 session 0x5623555d0fc0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 2957312 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f7778000/0x0/0x4ffc00000, data 0x22ee3d0/0x23d4000, compress 0x0/0x0/0x0, omap 0x300f5, meta 0x607ff0b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:42.049338+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 118784000 unmapped: 2859008 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 17
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:43.049481+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 2727936 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:44.049664+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f7734000/0x0/0x4ffc00000, data 0x23330a3/0x2418000, compress 0x0/0x0/0x0, omap 0x30185, meta 0x607fe7b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 2400256 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:45.049810+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376299 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 2367488 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:46.049966+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 2367488 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:47.050237+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 1556480 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:48.050488+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f76e4000/0x0/0x4ffc00000, data 0x2382028/0x2468000, compress 0x0/0x0/0x0, omap 0x303c5, meta 0x607fc3b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 1540096 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:49.050658+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 1540096 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:50.050801+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.851744652s of 10.002460480s, submitted: 249
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380079 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 119496704 unmapped: 2146304 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:51.051045+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f76bd000/0x0/0x4ffc00000, data 0x23a8102/0x248f000, compress 0x0/0x0/0x0, omap 0x30a3d, meta 0x607f5c3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 119504896 unmapped: 2138112 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:52.051531+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f7672000/0x0/0x4ffc00000, data 0x23f2b1d/0x24da000, compress 0x0/0x0/0x0, omap 0x30d55, meta 0x607f2ab), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 119504896 unmapped: 2138112 heap: 121643008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:53.051740+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 2809856 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:54.051979+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 2809856 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f7629000/0x0/0x4ffc00000, data 0x243d0c9/0x2523000, compress 0x0/0x0/0x0, omap 0x310fd, meta 0x607ef03), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:55.052125+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393225 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 120930304 unmapped: 1761280 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f7618000/0x0/0x4ffc00000, data 0x244cea4/0x2534000, compress 0x0/0x0/0x0, omap 0x313cd, meta 0x607ec33), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:56.052313+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 1400832 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:57.052655+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 121102336 unmapped: 1589248 heap: 122691584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:58.052816+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 121110528 unmapped: 2629632 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:59.052977+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 121135104 unmapped: 2605056 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:00.053163+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.776637077s of 10.002527237s, submitted: 87
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395619 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 121143296 unmapped: 2596864 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f758b000/0x0/0x4ffc00000, data 0x24da75f/0x25c1000, compress 0x0/0x0/0x0, omap 0x31a8d, meta 0x607e573), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:01.053361+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 121479168 unmapped: 2260992 heap: 123740160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:02.053516+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f7541000/0x0/0x4ffc00000, data 0x2525fb8/0x260b000, compress 0x0/0x0/0x0, omap 0x31d5d, meta 0x607e2a3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 2523136 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7541000/0x0/0x4ffc00000, data 0x2525fb8/0x260b000, compress 0x0/0x0/0x0, omap 0x31e35, meta 0x607e1cb), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:03.053692+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 2523136 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:04.053835+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 2523136 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:05.053986+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404791 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 122404864 unmapped: 2383872 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:06.054157+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 122445824 unmapped: 2342912 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f74d9000/0x0/0x4ffc00000, data 0x258c60f/0x2673000, compress 0x0/0x0/0x0, omap 0x32575, meta 0x607da8b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:07.054421+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 122568704 unmapped: 2220032 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:08.054622+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 2899968 heap: 124788736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:09.054845+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 3940352 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:10.055047+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.662630081s of 10.003691673s, submitted: 127
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414219 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 3940352 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:11.055215+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 3809280 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f7479000/0x0/0x4ffc00000, data 0x25ec1b7/0x26d3000, compress 0x0/0x0/0x0, omap 0x32b1a, meta 0x607d4e6), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:12.055391+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123133952 unmapped: 2703360 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:13.055524+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123133952 unmapped: 2703360 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f744b000/0x0/0x4ffc00000, data 0x2615f9f/0x26ff000, compress 0x0/0x0/0x0, omap 0x32db2, meta 0x607d24e), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:14.055680+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123133952 unmapped: 2703360 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f744b000/0x0/0x4ffc00000, data 0x2615f9f/0x26ff000, compress 0x0/0x0/0x0, omap 0x32db2, meta 0x607d24e), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:15.055847+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1418819 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123133952 unmapped: 2703360 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f744b000/0x0/0x4ffc00000, data 0x2615f9f/0x26ff000, compress 0x0/0x0/0x0, omap 0x32db2, meta 0x607d24e), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:16.055975+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123133952 unmapped: 2703360 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:17.056073+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 2457600 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:18.056157+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f7406000/0x0/0x4ffc00000, data 0x265cf97/0x2746000, compress 0x0/0x0/0x0, omap 0x32f1a, meta 0x607d0e6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 2457600 heap: 125837312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:19.056356+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f7406000/0x0/0x4ffc00000, data 0x265cf97/0x2746000, compress 0x0/0x0/0x0, omap 0x32faa, meta 0x607d056), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 3407872 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:20.056549+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.795921326s of 10.001452446s, submitted: 58
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420993 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123559936 unmapped: 3325952 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:21.056729+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123559936 unmapped: 3325952 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:22.056880+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123559936 unmapped: 3325952 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:23.057024+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f73bf000/0x0/0x4ffc00000, data 0x26a4dd1/0x278d000, compress 0x0/0x0/0x0, omap 0x3330a, meta 0x607ccf6), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123764736 unmapped: 3121152 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f73bf000/0x0/0x4ffc00000, data 0x26a4dd1/0x278d000, compress 0x0/0x0/0x0, omap 0x3330a, meta 0x607ccf6), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:24.057216+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f737c000/0x0/0x4ffc00000, data 0x26e7305/0x27d0000, compress 0x0/0x0/0x0, omap 0x33472, meta 0x607cb8e), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123772928 unmapped: 3112960 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:25.057350+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424993 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 123772928 unmapped: 3112960 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:26.058163+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 1892352 heap: 126885888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:27.058492+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 2859008 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:28.058823+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f7346000/0x0/0x4ffc00000, data 0x271c659/0x2805000, compress 0x0/0x0/0x0, omap 0x33742, meta 0x607c8be), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 2859008 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:29.059953+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 2613248 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:30.060188+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.661027908s of 10.004848480s, submitted: 51
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428189 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 2613248 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:31.061008+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 2613248 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f7308000/0x0/0x4ffc00000, data 0x275b5c9/0x2844000, compress 0x0/0x0/0x0, omap 0x338f2, meta 0x607c70e), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:32.061251+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 124747776 unmapped: 3186688 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:33.061821+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 124755968 unmapped: 3178496 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:34.062294+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 124755968 unmapped: 3178496 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:35.062482+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435079 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 1892352 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:36.062988+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 126132224 unmapped: 1802240 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:37.065262+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f72a7000/0x0/0x4ffc00000, data 0x27ba3c3/0x28a5000, compress 0x0/0x0/0x0, omap 0x34002, meta 0x607bffe), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 1556480 heap: 127934464 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:38.065664+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 2195456 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:39.065997+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 2342912 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:40.066131+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.198531151s of 10.005724907s, submitted: 73
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435743 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f7252000/0x0/0x4ffc00000, data 0x281090d/0x28fa000, compress 0x0/0x0/0x0, omap 0x344ca, meta 0x607bb36), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 2342912 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:41.066481+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 3637248 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:42.066681+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f721a000/0x0/0x4ffc00000, data 0x2845143/0x2930000, compress 0x0/0x0/0x0, omap 0x34882, meta 0x607b77e), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 3989504 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:43.066870+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 3883008 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:44.067008+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 2203648 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:45.067328+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451201 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 2187264 heap: 128983040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:46.067677+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f71b0000/0x0/0x4ffc00000, data 0x28b0230/0x299c000, compress 0x0/0x0/0x0, omap 0x34cf5, meta 0x607b30b), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 127057920 unmapped: 2973696 heap: 130031616 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:47.068078+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f7184000/0x0/0x4ffc00000, data 0x28db540/0x29c8000, compress 0x0/0x0/0x0, omap 0x34e5d, meta 0x607b1a3), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 127533056 unmapped: 2498560 heap: 130031616 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:48.068308+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 2899968 heap: 130031616 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:49.068520+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 2899968 heap: 130031616 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:50.068735+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457271 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 127352832 unmapped: 2678784 heap: 130031616 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:51.068971+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.776168823s of 11.002607346s, submitted: 123
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 127713280 unmapped: 2318336 heap: 130031616 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:52.069104+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 2072576 heap: 130031616 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f7102000/0x0/0x4ffc00000, data 0x2958663/0x2a48000, compress 0x0/0x0/0x0, omap 0x3576a, meta 0x607a896), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:53.069283+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 128073728 unmapped: 1957888 heap: 130031616 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:54.069449+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 128073728 unmapped: 1957888 heap: 130031616 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:55.069679+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1462561 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 128073728 unmapped: 1957888 heap: 130031616 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x297db39/0x2a6e000, compress 0x0/0x0/0x0, omap 0x35b12, meta 0x607a4ee), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:56.069866+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129220608 unmapped: 1859584 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:57.070138+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 1744896 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:58.070340+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 1744896 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:59.070523+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129482752 unmapped: 1597440 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:00.070678+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468077 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129482752 unmapped: 1597440 heap: 131080192 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:01.070849+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f70a8000/0x0/0x4ffc00000, data 0x29b123c/0x2aa2000, compress 0x0/0x0/0x0, omap 0x35eca, meta 0x607a136), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.671961784s of 10.488462448s, submitted: 73
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130539520 unmapped: 1589248 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:02.071048+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130539520 unmapped: 1589248 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:03.071203+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129949696 unmapped: 2179072 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:04.071364+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 2146304 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:05.071496+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472535 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f7072000/0x0/0x4ffc00000, data 0x29e7673/0x2ada000, compress 0x0/0x0/0x0, omap 0x363e7, meta 0x6079c19), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 2072576 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:06.071597+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f7072000/0x0/0x4ffc00000, data 0x29e7673/0x2ada000, compress 0x0/0x0/0x0, omap 0x363e7, meta 0x6079c19), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 2072576 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:07.071787+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f706d000/0x0/0x4ffc00000, data 0x29ecb9d/0x2adf000, compress 0x0/0x0/0x0, omap 0x363e7, meta 0x6079c19), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 2072576 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:08.071900+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f706d000/0x0/0x4ffc00000, data 0x29ecb9d/0x2adf000, compress 0x0/0x0/0x0, omap 0x35e99, meta 0x607a167), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130097152 unmapped: 2031616 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:09.072020+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129941504 unmapped: 2187264 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:10.072197+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475267 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129949696 unmapped: 2179072 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:11.072336+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.485389709s of 10.165213585s, submitted: 43
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130097152 unmapped: 2031616 heap: 132128768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:12.072468+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 3047424 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:13.072656+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 157 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f7013000/0x0/0x4ffc00000, data 0x2a43008/0x2b37000, compress 0x0/0x0/0x0, omap 0x363b4, meta 0x6079c4c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 2924544 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:14.072819+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6ff5000/0x0/0x4ffc00000, data 0x2a5f3d3/0x2b53000, compress 0x0/0x0/0x0, omap 0x36822, meta 0x60797de), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6ff5000/0x0/0x4ffc00000, data 0x2a5f3d3/0x2b53000, compress 0x0/0x0/0x0, omap 0x36822, meta 0x60797de), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 3203072 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:15.072966+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482555 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 3203072 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:16.073137+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 3194880 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:17.073374+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6ff5000/0x0/0x4ffc00000, data 0x2a63149/0x2b57000, compress 0x0/0x0/0x0, omap 0x36a13, meta 0x60795ed), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 3194880 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:18.073659+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6ff5000/0x0/0x4ffc00000, data 0x2a63149/0x2b57000, compress 0x0/0x0/0x0, omap 0x36a13, meta 0x60795ed), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 3186688 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:19.073859+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6ff5000/0x0/0x4ffc00000, data 0x2a63149/0x2b57000, compress 0x0/0x0/0x0, omap 0x36a13, meta 0x60795ed), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 159 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129998848 unmapped: 3178496 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:20.073986+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1485817 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129998848 unmapped: 3178496 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:21.074151+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.955572128s of 10.002736092s, submitted: 82
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129638400 unmapped: 3538944 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:22.074368+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129646592 unmapped: 3530752 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:23.074606+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 3383296 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:24.074758+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fca000/0x0/0x4ffc00000, data 0x2a89afd/0x2b80000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fca000/0x0/0x4ffc00000, data 0x2a89afd/0x2b80000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 3383296 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:25.074959+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489267 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 3383296 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:26.075153+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 3383296 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:27.075415+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 3383296 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:28.075641+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fca000/0x0/0x4ffc00000, data 0x2a89afd/0x2b80000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fca000/0x0/0x4ffc00000, data 0x2a89afd/0x2b80000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 3383296 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:29.075791+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 3383296 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:30.076460+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489267 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 3383296 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:31.076732+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 2334720 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:32.077080+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 2334720 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:33.077936+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 2334720 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:34.078813+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fc0000/0x0/0x4ffc00000, data 0x2a93e26/0x2b8a000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fc0000/0x0/0x4ffc00000, data 0x2a93e26/0x2b8a000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 2334720 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:35.079189+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490395 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 2334720 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:36.079530+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 2334720 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:37.079885+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 2334720 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:38.080470+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 2334720 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:39.081054+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:40.081639+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130842624 unmapped: 2334720 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fc0000/0x0/0x4ffc00000, data 0x2a93e26/0x2b8a000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490395 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fc0000/0x0/0x4ffc00000, data 0x2a93e26/0x2b8a000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:41.082099+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:42.082521+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:43.082753+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:44.083165+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fc0000/0x0/0x4ffc00000, data 0x2a93e26/0x2b8a000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:45.083353+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fc0000/0x0/0x4ffc00000, data 0x2a93e26/0x2b8a000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490395 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:46.083506+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:47.083847+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:48.083997+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:49.084157+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:50.084379+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fc0000/0x0/0x4ffc00000, data 0x2a93e26/0x2b8a000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fc0000/0x0/0x4ffc00000, data 0x2a93e26/0x2b8a000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490395 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:51.084643+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:52.084754+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:53.084912+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:54.085137+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fc0000/0x0/0x4ffc00000, data 0x2a93e26/0x2b8a000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:55.085602+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490395 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:56.085889+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130850816 unmapped: 2326528 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:57.086073+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 2318336 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:58.086280+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 2318336 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:59.086435+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 2318336 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.706695557s of 37.742534637s, submitted: 20
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fc0000/0x0/0x4ffc00000, data 0x2a93e26/0x2b8a000, compress 0x0/0x0/0x0, omap 0x370bc, meta 0x6078f44), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:00.086646+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 2318336 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1491299 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:01.086855+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6fa9000/0x0/0x4ffc00000, data 0x2aacb2b/0x2ba3000, compress 0x0/0x0/0x0, omap 0x3721f, meta 0x6078de1), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 2318336 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:02.087027+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 2318336 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:03.087203+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 2310144 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:04.087375+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 2187264 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:05.087525+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 130990080 unmapped: 2187264 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1492339 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:06.087710+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 2154496 heap: 133177344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6f8a000/0x0/0x4ffc00000, data 0x2acb9f9/0x2bc2000, compress 0x0/0x0/0x0, omap 0x3752c, meta 0x6078ad4), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:07.087856+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 131194880 unmapped: 3031040 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6f5f000/0x0/0x4ffc00000, data 0x2af6fe4/0x2bed000, compress 0x0/0x0/0x0, omap 0x37c99, meta 0x6078367), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:08.087962+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 2932736 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6f46000/0x0/0x4ffc00000, data 0x2b0fce9/0x2c06000, compress 0x0/0x0/0x0, omap 0x37e01, meta 0x60781ff), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:09.088073+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 131506176 unmapped: 2719744 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:10.088198+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.073197365s of 10.625436783s, submitted: 29
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 131506176 unmapped: 2719744 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497019 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:11.088336+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 131506176 unmapped: 2719744 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:12.088432+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 2932736 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:13.088645+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 2924544 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x2b3ecf4/0x2c36000, compress 0x0/0x0/0x0, omap 0x38119, meta 0x6077ee7), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:14.088771+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 2924544 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:15.088892+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 131334144 unmapped: 2891776 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1498383 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:16.089193+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 131334144 unmapped: 2891776 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x2b3ecf4/0x2c36000, compress 0x0/0x0/0x0, omap 0x381f1, meta 0x6077e0f), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:17.089415+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 1417216 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6eb8000/0x0/0x4ffc00000, data 0x2b9c423/0x2c94000, compress 0x0/0x0/0x0, omap 0x383e9, meta 0x6077c17), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:18.089548+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 132997120 unmapped: 1228800 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:19.089761+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 132997120 unmapped: 1228800 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6eb9000/0x0/0x4ffc00000, data 0x2b9c452/0x2c93000, compress 0x0/0x0/0x0, omap 0x38479, meta 0x6077b87), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:20.089912+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.935764313s of 10.002146721s, submitted: 26
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 132997120 unmapped: 1228800 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501161 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:21.090017+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133120000 unmapped: 1105920 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6e9a000/0x0/0x4ffc00000, data 0x2bbb4db/0x2cb2000, compress 0x0/0x0/0x0, omap 0x38701, meta 0x60778ff), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:22.090129+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 1097728 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:23.090311+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 1097728 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:24.090473+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6e78000/0x0/0x4ffc00000, data 0x2bddaf5/0x2cd4000, compress 0x0/0x0/0x0, omap 0x38398, meta 0x6077c68), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133251072 unmapped: 974848 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:25.090630+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 958464 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502901 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:26.090765+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 958464 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:27.091730+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133210112 unmapped: 2064384 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:28.091877+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133218304 unmapped: 2056192 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6e2f000/0x0/0x4ffc00000, data 0x2c26a40/0x2d1d000, compress 0x0/0x0/0x0, omap 0x386ec, meta 0x6077914), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:29.092074+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133218304 unmapped: 2056192 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6e2f000/0x0/0x4ffc00000, data 0x2c26b0a/0x2d1d000, compress 0x0/0x0/0x0, omap 0x3877a, meta 0x6077886), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:30.092240+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.593827248s of 10.002040863s, submitted: 33
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 1900544 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503721 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:31.092436+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 1900544 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:32.092641+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 1892352 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6e2f000/0x0/0x4ffc00000, data 0x2c26fb8/0x2d1d000, compress 0x0/0x0/0x0, omap 0x3884f, meta 0x60777b1), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:33.092806+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 1892352 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:34.092972+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 1892352 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:35.093133+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 1892352 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1504009 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:36.093674+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 1892352 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:37.094148+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134430720 unmapped: 1892352 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:38.094618+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134430720 unmapped: 1892352 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6e0c000/0x0/0x4ffc00000, data 0x2c4a422/0x2d40000, compress 0x0/0x0/0x0, omap 0x3884f, meta 0x60777b1), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:39.095302+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 1744896 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:40.095680+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 1744896 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1507221 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:41.095923+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 1744896 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:42.096268+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 1744896 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6e0c000/0x0/0x4ffc00000, data 0x2c4a422/0x2d40000, compress 0x0/0x0/0x0, omap 0x3884f, meta 0x60777b1), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.133184433s of 12.788790703s, submitted: 9
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:43.096460+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 1671168 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:44.096817+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 1671168 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:45.097141+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 1671168 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1507141 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:46.097497+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 1671168 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:47.097966+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 1572864 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6dd7000/0x0/0x4ffc00000, data 0x2c7e552/0x2d75000, compress 0x0/0x0/0x0, omap 0x389b2, meta 0x607764e), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:48.098187+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 1572864 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:49.098348+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6dd7000/0x0/0x4ffc00000, data 0x2c7e552/0x2d75000, compress 0x0/0x0/0x0, omap 0x38a87, meta 0x6077579), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 1572864 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:50.098597+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x562355d0c000
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 1507328 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 18
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1513229 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: handle_auth_request added challenge on 0x562355d0c400
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:51.098784+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134881280 unmapped: 1441792 heap: 136323072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:52.099035+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d69000/0x0/0x4ffc00000, data 0x2cea4ef/0x2de3000, compress 0x0/0x0/0x0, omap 0x38ddb, meta 0x6077225), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.947562218s of 10.048245430s, submitted: 46
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:53.099268+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 19
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:54.099483+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:55.099678+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515423 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:56.099904+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:57.100111+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:58.100232+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d64000/0x0/0x4ffc00000, data 0x2cefc49/0x2de8000, compress 0x0/0x0/0x0, omap 0x38ef7, meta 0x6077109), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:59.100399+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:00.100522+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515967 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:01.100866+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:02.101030+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d64000/0x0/0x4ffc00000, data 0x2cefc49/0x2de8000, compress 0x0/0x0/0x0, omap 0x39013, meta 0x6076fed), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:03.101171+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:04.101326+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:05.101478+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515967 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:06.101616+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d64000/0x0/0x4ffc00000, data 0x2cefc49/0x2de8000, compress 0x0/0x0/0x0, omap 0x39013, meta 0x6076fed), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:07.101792+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:08.101961+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:09.102110+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:10.102302+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515967 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:11.102675+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:12.102813+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d64000/0x0/0x4ffc00000, data 0x2cefc49/0x2de8000, compress 0x0/0x0/0x0, omap 0x39013, meta 0x6076fed), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:13.102955+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:14.103135+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:15.103322+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:16.103456+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515967 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:17.103631+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:18.103798+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d64000/0x0/0x4ffc00000, data 0x2cefc49/0x2de8000, compress 0x0/0x0/0x0, omap 0x39013, meta 0x6076fed), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:19.103962+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:20.104136+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:21.104282+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515967 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d64000/0x0/0x4ffc00000, data 0x2cefc49/0x2de8000, compress 0x0/0x0/0x0, omap 0x39013, meta 0x6076fed), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:22.104418+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:23.352095+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:24.352204+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:25.352349+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515967 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:26.352946+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:27.353073+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d64000/0x0/0x4ffc00000, data 0x2cefc49/0x2de8000, compress 0x0/0x0/0x0, omap 0x39013, meta 0x6076fed), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:28.353179+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:29.353273+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:30.353477+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d64000/0x0/0x4ffc00000, data 0x2cefc49/0x2de8000, compress 0x0/0x0/0x0, omap 0x39013, meta 0x6076fed), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515967 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:31.353623+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:32.353722+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:33.353863+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.095912933s of 41.175083160s, submitted: 9
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:34.354016+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:35.354171+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d64000/0x0/0x4ffc00000, data 0x2cefc49/0x2de8000, compress 0x0/0x0/0x0, omap 0x390a1, meta 0x6076f5f), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515967 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:36.354330+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:37.354611+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:38.354760+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:39.354896+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 2990080 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:40.355069+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134471680 unmapped: 2899968 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d3a000/0x0/0x4ffc00000, data 0x2d197c0/0x2e12000, compress 0x0/0x0/0x0, omap 0x39204, meta 0x6076dfc), peers [0,2] op hist [0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519987 data_alloc: 218103808 data_used: 13899
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:41.355209+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134488064 unmapped: 2883584 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:42.355336+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 2818048 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:43.355460+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 2818048 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 20
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:44.355619+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 2719744 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:45.355749+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 2719744 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.935233116s of 12.002580643s, submitted: 14
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519515 data_alloc: 218103808 data_used: 13899
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6d23000/0x0/0x4ffc00000, data 0x2d303be/0x2e29000, compress 0x0/0x0/0x0, omap 0x39367, meta 0x6076c99), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:46.355885+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 2719744 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:47.356023+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 2662400 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:48.356154+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 2662400 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:49.356249+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 2662400 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 21
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6cda000/0x0/0x4ffc00000, data 0x2d79628/0x2e72000, compress 0x0/0x0/0x0, omap 0x39511, meta 0x6076aef), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:50.356402+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 2654208 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522693 data_alloc: 218103808 data_used: 13899
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:51.356602+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 2654208 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:52.356750+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 2654208 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:53.357023+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 2654208 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:54.357250+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 2654208 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:55.357448+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 2654208 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6cda000/0x0/0x4ffc00000, data 0x2d79b05/0x2e72000, compress 0x0/0x0/0x0, omap 0x39674, meta 0x607698c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521693 data_alloc: 218103808 data_used: 13899
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:56.357592+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 2654208 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.566743851s of 10.616423607s, submitted: 18
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6cda000/0x0/0x4ffc00000, data 0x2d79b05/0x2e72000, compress 0x0/0x0/0x0, omap 0x39674, meta 0x607698c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:57.357774+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 1605632 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:58.357985+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 1572864 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6c98000/0x0/0x4ffc00000, data 0x2dbc4ad/0x2eb4000, compress 0x0/0x0/0x0, omap 0x397d7, meta 0x6076829), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:59.358141+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6c98000/0x0/0x4ffc00000, data 0x2dbc4ad/0x2eb4000, compress 0x0/0x0/0x0, omap 0x397d7, meta 0x6076829), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 1572864 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:00.358256+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135970816 unmapped: 1400832 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528101 data_alloc: 218103808 data_used: 13899
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6c98000/0x0/0x4ffc00000, data 0x2dbc4ad/0x2eb4000, compress 0x0/0x0/0x0, omap 0x397d7, meta 0x6076829), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:01.358426+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136183808 unmapped: 1187840 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:02.358626+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136183808 unmapped: 1187840 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:03.358739+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6c81000/0x0/0x4ffc00000, data 0x2dd2f91/0x2ecb000, compress 0x0/0x0/0x0, omap 0x398ac, meta 0x6076754), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136183808 unmapped: 1187840 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:04.358871+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 1368064 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:05.358988+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 1368064 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1524717 data_alloc: 218103808 data_used: 13899
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:06.359148+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 1368064 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.502129555s of 10.000110626s, submitted: 18
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:07.359339+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 1368064 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6c74000/0x0/0x4ffc00000, data 0x2de0982/0x2ed8000, compress 0x0/0x0/0x0, omap 0x39c00, meta 0x6076400), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:08.359483+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 1368064 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:09.359677+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 1368064 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:10.359814+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6c74000/0x0/0x4ffc00000, data 0x2de0a4c/0x2ed8000, compress 0x0/0x0/0x0, omap 0x39daa, meta 0x6076256), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 1368064 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1524733 data_alloc: 218103808 data_used: 13899
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:11.359890+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 1368064 heap: 137371648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 ms_handle_reset con 0x562355d0c000 session 0x562355efa000
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 ms_handle_reset con 0x562355d0c400 session 0x5623558da700
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:12.360041+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 2146304 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:13.360191+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 2146304 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 22
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:14.360358+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 2146304 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6c74000/0x0/0x4ffc00000, data 0x2de0a4c/0x2ed8000, compress 0x0/0x0/0x0, omap 0x39daa, meta 0x6076256), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:15.360633+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 2146304 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1524429 data_alloc: 218103808 data_used: 13899
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:16.360793+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 2146304 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.227948189s of 10.390320778s, submitted: 192
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:17.360975+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 2146304 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:18.361106+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 2146304 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:19.361247+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 2146304 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:20.361384+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6c74000/0x0/0x4ffc00000, data 0x2de0a4c/0x2ed8000, compress 0x0/0x0/0x0, omap 0x3a2a8, meta 0x6075d58), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1524589 data_alloc: 218103808 data_used: 13899
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:21.361525+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:22.361690+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:23.361816+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:24.361939+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:25.362077+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:26.362251+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523983 data_alloc: 218103808 data_used: 13903
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6c75000/0x0/0x4ffc00000, data 0x2de0ae4/0x2ed7000, compress 0x0/0x0/0x0, omap 0x3a834, meta 0x60757cc), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:27.362382+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.104224205s of 10.885497093s, submitted: 15
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:28.362497+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:29.362621+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6c76000/0x0/0x4ffc00000, data 0x2de0b13/0x2ed6000, compress 0x0/0x0/0x0, omap 0x3a8c2, meta 0x607573e), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:30.362739+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f6c76000/0x0/0x4ffc00000, data 0x2de0b13/0x2ed6000, compress 0x0/0x0/0x0, omap 0x3aafa, meta 0x6075506), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:31.362952+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523121 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _renew_subs
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:32.363127+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:33.363291+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f6c71000/0x0/0x4ffc00000, data 0x2de27e2/0x2ed9000, compress 0x0/0x0/0x0, omap 0x3aeff, meta 0x6075101), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:34.363419+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:35.363624+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f6c71000/0x0/0x4ffc00000, data 0x2de27e2/0x2ed9000, compress 0x0/0x0/0x0, omap 0x3afd4, meta 0x607502c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:36.363838+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526775 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:37.364082+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:38.364250+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.191946030s of 10.363080025s, submitted: 37
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:39.364421+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:40.364635+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:41.364764+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529533 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 2138112 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:42.364891+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:43.365127+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:44.365294+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:45.365466+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:46.365604+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529533 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:47.365974+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:48.366107+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:49.366267+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:50.366398+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:51.366635+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529533 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:52.366808+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:53.367002+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:54.367220+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:55.367374+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:56.367509+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529533 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:57.367745+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:58.367948+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:59.368115+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:00.368315+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:01.368817+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529533 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:02.368964+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:03.369128+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:04.369274+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:05.369486+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:06.369688+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529533 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4261/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b2a4, meta 0x6074d5c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:07.369889+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.917907715s of 29.927753448s, submitted: 58
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:08.370035+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:09.370195+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:10.370386+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 2129920 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6f000/0x0/0x4ffc00000, data 0x2de42fc/0x2edd000, compress 0x0/0x0/0x0, omap 0x3b407, meta 0x6074bf9), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:11.370533+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530505 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:12.370792+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:13.371173+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6f000/0x0/0x4ffc00000, data 0x2de42fc/0x2edd000, compress 0x0/0x0/0x0, omap 0x3b4dc, meta 0x6074b24), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:14.371300+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:15.371470+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:16.371618+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530217 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6f000/0x0/0x4ffc00000, data 0x2de42fc/0x2edd000, compress 0x0/0x0/0x0, omap 0x3b56a, meta 0x6074a96), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:17.371836+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:18.371994+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:19.372161+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x2de4397/0x2ede000, compress 0x0/0x0/0x0, omap 0x3b75b, meta 0x60748a5), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.731336594s of 11.752409935s, submitted: 8
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:20.372313+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:21.372444+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530761 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6f000/0x0/0x4ffc00000, data 0x2de432b/0x2edc000, compress 0x0/0x0/0x0, omap 0x3b7e9, meta 0x6074817), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:22.372635+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c6f000/0x0/0x4ffc00000, data 0x2de432b/0x2edc000, compress 0x0/0x0/0x0, omap 0x3ba21, meta 0x60745df), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:23.372846+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:24.373025+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6c70000/0x0/0x4ffc00000, data 0x2de432b/0x2edc000, compress 0x0/0x0/0x0, omap 0x3baaf, meta 0x6074551), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:25.373150+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:26.373259+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 rsyslogd[1002]: imjournal from <np0005590528:ceph-osd>: begin to drop messages due to rate-limiting
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529787 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 2121728 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:27.373467+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 2105344 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:28.373654+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 2105344 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:29.373796+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 2105344 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:30.373922+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6c6b000/0x0/0x4ffc00000, data 0x2de5ffa/0x2edf000, compress 0x0/0x0/0x0, omap 0x3be4a, meta 0x60741b6), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6c6b000/0x0/0x4ffc00000, data 0x2de5ffa/0x2edf000, compress 0x0/0x0/0x0, omap 0x3be4a, meta 0x60741b6), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 2105344 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:31.374102+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533121 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 2105344 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:32.374269+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 2105344 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:33.374433+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 2105344 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:34.374626+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 2105344 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:35.375118+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.392614365s of 15.530655861s, submitted: 39
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6c6b000/0x0/0x4ffc00000, data 0x2de5ffa/0x2edf000, compress 0x0/0x0/0x0, omap 0x3be4a, meta 0x60741b6), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:36.375284+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535895 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:37.375450+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:38.375627+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c68000/0x0/0x4ffc00000, data 0x2de7a79/0x2ee2000, compress 0x0/0x0/0x0, omap 0x3c1cf, meta 0x6073e31), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:39.375762+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:40.375937+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:41.376144+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535895 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c68000/0x0/0x4ffc00000, data 0x2de7a79/0x2ee2000, compress 0x0/0x0/0x0, omap 0x3c1cf, meta 0x6073e31), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:42.378133+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:43.378276+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c68000/0x0/0x4ffc00000, data 0x2de7a79/0x2ee2000, compress 0x0/0x0/0x0, omap 0x3c1cf, meta 0x6073e31), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:44.378471+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c68000/0x0/0x4ffc00000, data 0x2de7a79/0x2ee2000, compress 0x0/0x0/0x0, omap 0x3c1cf, meta 0x6073e31), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:45.378692+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:46.378918+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535895 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:47.379177+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c68000/0x0/0x4ffc00000, data 0x2de7a79/0x2ee2000, compress 0x0/0x0/0x0, omap 0x3c1cf, meta 0x6073e31), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:48.379346+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:49.379611+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:50.379862+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c68000/0x0/0x4ffc00000, data 0x2de7a79/0x2ee2000, compress 0x0/0x0/0x0, omap 0x3c1cf, meta 0x6073e31), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:51.379992+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535895 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:52.380181+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:53.380325+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c68000/0x0/0x4ffc00000, data 0x2de7a79/0x2ee2000, compress 0x0/0x0/0x0, omap 0x3c1cf, meta 0x6073e31), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:54.380641+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:55.380796+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:56.380995+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535895 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:57.381241+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.787818909s of 21.817119598s, submitted: 13
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c68000/0x0/0x4ffc00000, data 0x2de7a79/0x2ee2000, compress 0x0/0x0/0x0, omap 0x3c1cf, meta 0x6073e31), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:58.381450+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136331264 unmapped: 2088960 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:59.381694+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136339456 unmapped: 2080768 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:00.381870+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136339456 unmapped: 2080768 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:01.382085+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535319 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c6a000/0x0/0x4ffc00000, data 0x2de7a79/0x2ee2000, compress 0x0/0x0/0x0, omap 0x3c407, meta 0x6073bf9), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:02.382242+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:03.382398+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c69000/0x0/0x4ffc00000, data 0x2de7b14/0x2ee3000, compress 0x0/0x0/0x0, omap 0x3c407, meta 0x6073bf9), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:04.382586+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c69000/0x0/0x4ffc00000, data 0x2de7b14/0x2ee3000, compress 0x0/0x0/0x0, omap 0x3c407, meta 0x6073bf9), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:05.382786+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:06.382976+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1536469 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c69000/0x0/0x4ffc00000, data 0x2de7b14/0x2ee3000, compress 0x0/0x0/0x0, omap 0x3c5f8, meta 0x6073a08), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:07.383146+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.507697105s of 10.742763519s, submitted: 10
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:08.383330+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:09.383463+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:10.383605+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f6c69000/0x0/0x4ffc00000, data 0x2de7c0d/0x2ee3000, compress 0x0/0x0/0x0, omap 0x3ca21, meta 0x60735df), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:11.383793+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1538017 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:12.383967+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 2072576 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f6c68000/0x0/0x4ffc00000, data 0x2de7ca8/0x2ee4000, compress 0x0/0x0/0x0, omap 0x3cce7, meta 0x6073319), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:13.384323+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 2064384 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f6c63000/0x0/0x4ffc00000, data 0x2de98ad/0x2ee7000, compress 0x0/0x0/0x0, omap 0x3cf20, meta 0x60730e0), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:14.384429+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 2064384 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:15.384578+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 2064384 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:16.384713+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541351 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 2064384 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:17.384930+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 2064384 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:18.385124+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 2056192 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:19.385273+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 2056192 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.598173141s of 12.001996040s, submitted: 34
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f6c63000/0x0/0x4ffc00000, data 0x2de98ad/0x2ee7000, compress 0x0/0x0/0x0, omap 0x3cfae, meta 0x6073052), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:20.385462+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 2048000 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:21.385592+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1545673 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 2048000 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:22.385739+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 2048000 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:23.386040+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 2048000 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:24.386221+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f6c62000/0x0/0x4ffc00000, data 0x2deb3f6/0x2eea000, compress 0x0/0x0/0x0, omap 0x3d5b4, meta 0x6072a4c), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 2048000 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:25.386358+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:26.386502+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543789 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 2670592 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:27.386691+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f6c63000/0x0/0x4ffc00000, data 0x2deb425/0x2ee9000, compress 0x0/0x0/0x0, omap 0x3d8c1, meta 0x607273f), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 167 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 2654208 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:28.386813+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 2654208 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:29.386932+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 2654208 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:30.387046+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 2654208 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:31.387168+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.838737488s of 11.591191292s, submitted: 65
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1547267 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 2654208 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:32.387309+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f6c5e000/0x0/0x4ffc00000, data 0x2ded02a/0x2eec000, compress 0x0/0x0/0x0, omap 0x3dbd0, meta 0x6072430), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 2654208 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:33.387729+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 2654208 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:34.387875+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 2654208 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:35.388080+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 2654208 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:36.388232+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1549897 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:37.388398+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f6c5b000/0x0/0x4ffc00000, data 0x2deeaa9/0x2eef000, compress 0x0/0x0/0x0, omap 0x3df59, meta 0x60720a7), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:38.388614+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:39.388762+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:40.388884+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:41.389053+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1549897 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:42.389217+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.386541367s of 11.412962914s, submitted: 16
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:43.389351+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f6c5b000/0x0/0x4ffc00000, data 0x2deeaa9/0x2eef000, compress 0x0/0x0/0x0, omap 0x3e191, meta 0x6071e6f), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:44.389488+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:45.389625+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:46.389761+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551013 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:47.389951+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:48.390085+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:49.390217+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f6c5c000/0x0/0x4ffc00000, data 0x2deec0e/0x2ef0000, compress 0x0/0x0/0x0, omap 0x3e457, meta 0x6071ba9), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:50.390417+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:51.390961+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1550885 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 2646016 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:52.391120+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:53.391254+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.699297905s of 10.323380470s, submitted: 39
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:54.391386+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:55.391511+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6c58000/0x0/0x4ffc00000, data 0x2df0842/0x2ef2000, compress 0x0/0x0/0x0, omap 0x3eb02, meta 0x60714fe), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:56.391665+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553789 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:57.391844+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:58.392691+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:59.392821+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:00.392938+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:01.393080+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6c58000/0x0/0x4ffc00000, data 0x2df0842/0x2ef2000, compress 0x0/0x0/0x0, omap 0x3eb02, meta 0x60714fe), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553789 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:02.393207+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 14K writes, 54K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4725 syncs, 3.15 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4491 writes, 14K keys, 4491 commit groups, 1.0 writes per commit group, ingest: 21.35 MB, 0.04 MB/s
                                           Interval WAL: 4491 writes, 1896 syncs, 2.37 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6c58000/0x0/0x4ffc00000, data 0x2df0842/0x2ef2000, compress 0x0/0x0/0x0, omap 0x3eb02, meta 0x60714fe), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:03.393361+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 2629632 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:04.393513+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.900849342s of 11.217520714s, submitted: 1
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 2621440 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:05.393649+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 2621440 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:06.393804+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556403 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 2621440 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:07.394044+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 2621440 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:08.394164+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 2621440 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:09.394302+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 2621440 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:10.394416+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 2621440 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:11.394548+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556403 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 2621440 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:12.394700+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 2621440 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:13.394826+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 2621440 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:14.394948+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:15.395097+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:16.395224+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556403 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:17.395360+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:18.395507+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:19.395632+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:20.395755+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:21.395921+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556403 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:22.396033+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:23.396172+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:24.396302+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:25.396438+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:26.396564+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556403 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:27.396724+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:28.397005+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:29.397220+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135806976 unmapped: 2613248 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:30.397376+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:31.397582+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556403 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:32.397779+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:33.397992+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:34.398116+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:35.398250+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:36.398624+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556403 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:37.398813+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:38.398948+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:39.399063+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:40.399188+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:41.399322+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556403 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:42.399466+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:43.399626+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:44.399747+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:45.399934+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:46.400117+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556403 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:47.400314+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:48.400460+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:49.400542+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:50.400702+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:51.400859+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556403 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:52.401036+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:53.401208+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:54.401351+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 2605056 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:55.401488+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 2596864 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:56.401607+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 2596864 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556403 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:57.401749+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 2596864 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:58.401866+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 2596864 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:59.402024+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 2596864 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 55.401000977s of 55.411827087s, submitted: 14
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c55000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:00.402135+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135831552 unmapped: 2588672 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:01.402254+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 2490368 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1555683 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:02.402615+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 2424832 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:03.402706+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 2424832 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:04.402849+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 2424832 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:05.402974+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 2424832 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c57000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:06.403118+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 2424832 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c57000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1555683 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:07.403279+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 2424832 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:08.403405+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 2424832 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:09.403544+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 2424832 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:10.403660+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 2424832 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c57000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.702313423s of 10.892474174s, submitted: 90
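[annotation] The _kv_sync_thread utilization line is a periodic self-report: over the ~10.9 s window the kv sync thread was idle ~98% of the time while committing 90 transactions. The arithmetic, straight from the printed numbers:

    idle, window, submitted = 10.702313423, 10.892474174, 90
    busy = window - idle
    print(f"idle {idle / window:.2%}, busy {busy:.3f} s, "
          f"~{busy / submitted * 1000:.1f} ms per submitted txn")
    # idle 98.25%, busy 0.190 s, ~2.1 ms per submitted txn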
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:11.403815+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 2424832 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c57000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3ee8e, meta 0x6071172), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1555827 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:12.403948+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 ms_handle_reset con 0x5623559f0c00 session 0x562352d75c00
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 1867776 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Got map version 23
Jan 21 14:26:49 compute-0 ceph-osd[86795]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
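[annotation] handle_mgr_map reports the active mgr as an address vector: a msgr2 endpoint (v2, port 6800) and a legacy msgr1 endpoint (v1, port 6801) with a shared nonce, so clients speaking either protocol reach the same daemon instance. A minimal parser for that bracketed form (the regex is fitted to the line above, not a Ceph API):

    import re

    addrvec = "[v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]"
    for proto, host, port, nonce in re.findall(r"(v[12]):([\d.]+):(\d+)/(\d+)", addrvec):
        print(proto, host, port, nonce)
    # v2 192.168.122.100 6800 2882926037
    # v1 192.168.122.100 6801 2882926037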
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:13.404130+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 1835008 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f6c57000/0x0/0x4ffc00000, data 0x2df22c1/0x2ef5000, compress 0x0/0x0/0x0, omap 0x3f07f, meta 0x6070f81), peers [0,2] op hist [])
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:14.404249+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 1835008 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:15.404414+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 1835008 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:16.405285+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 136667136 unmapped: 1753088 heap: 138420224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: do_command 'config diff' '{prefix=config diff}'
Jan 21 14:26:49 compute-0 ceph-osd[86795]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 21 14:26:49 compute-0 ceph-osd[86795]: do_command 'config show' '{prefix=config show}'
Jan 21 14:26:49 compute-0 ceph-osd[86795]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 21 14:26:49 compute-0 ceph-osd[86795]: do_command 'counter dump' '{prefix=counter dump}'
Jan 21 14:26:49 compute-0 ceph-osd[86795]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 21 14:26:49 compute-0 ceph-osd[86795]: do_command 'counter schema' '{prefix=counter schema}'
Jan 21 14:26:49 compute-0 ceph-osd[86795]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
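[annotation] The do_command entries above are admin-socket requests ('config diff', 'config show', 'counter dump', 'counter schema') being serviced by the OSD; "result is 0 bytes" is the payload size this debug point happens to report (whether the reply was genuinely empty or simply not yet assembled here is not decidable from the line), so the useful output is fetched interactively. A sketch that reissues the same commands through the ceph CLI's daemon mode, which resolves the osd.1 admin socket under /var/run/ceph by default (an assumption about this deployment's layout):

    import subprocess

    # The same admin-socket commands the log shows being dispatched to osd.1.
    for cmd in ("config diff", "config show", "counter dump", "counter schema"):
        out = subprocess.run(
            ["ceph", "daemon", "osd.1", *cmd.split()],
            capture_output=True, text=True, check=True,
        ).stdout
        print(cmd, "->", len(out), "bytes")   # json.loads(out) gives the structured form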
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:49 compute-0 ceph-osd[86795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:49 compute-0 ceph-osd[86795]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1555827 data_alloc: 218103808 data_used: 13748
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:17.405433+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 137011200 unmapped: 2457600 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: tick
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_tickets
Jan 21 14:26:49 compute-0 ceph-osd[86795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:18.405684+0000)
Jan 21 14:26:49 compute-0 ceph-osd[86795]: prioritycache tune_memory target: 4294967296 mapped: 137043968 unmapped: 2424832 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:49 compute-0 ceph-osd[86795]: do_command 'log dump' '{prefix=log dump}'
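[annotation] 'log dump' asks the daemon to flush its in-memory debug-log ring out to the log. That is also the most plausible reading of the ceph-osd[85740] block later in this capture, whose journald stamps say 14:26:54 while the embedded _check_auth_rotating expiries read 13:53 and 13:54: buffered entries from an earlier window emitted in one burst. Quantifying the skew from the two clocks on one of those lines (taking the journal stamp as UTC, which the +0000 expiry stamps suggest):

    from datetime import datetime, timezone

    journal = datetime(2026, 1, 21, 14, 26, 54, tzinfo=timezone.utc)   # journald stamp
    expiry = datetime.fromisoformat("2026-01-21T13:53:24.951776+00:00")
    print(journal - expiry)   # ~0:33:29 -> the entries lag the flush time by half an hour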
Jan 21 14:26:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 21 14:26:49 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446196552' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 21 14:26:49 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 21 14:26:49 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1730099264' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 21 14:26:49 compute-0 ceph-mon[75031]: from='client.14636 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:49 compute-0 ceph-mon[75031]: pgmap v1426: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 14:26:49 compute-0 ceph-mon[75031]: from='client.14640 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:49 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1482148430' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 21 14:26:49 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2446196552' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 21 14:26:49 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1730099264' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 21 14:26:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 21 14:26:50 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/166235699' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 21 14:26:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 21 14:26:50 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3823339273' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 21 14:26:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 21 14:26:50 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876238749' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] _maybe_adjust
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662106431694153 of space, bias 1.0, pg target 0.19986319295082458 quantized to 32 (current 32)
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006057356161058062 of space, bias 4.0, pg target 0.7268827393269675 quantized to 16 (current 16)
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 21 14:26:50 compute-0 ceph-mgr[75322]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
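[annotation] This pg_autoscaler pass can be reproduced from the printed numbers. Each pool's raw pg target is usage_fraction x bias x 300, where the factor 300 falls straight out of the log (0.0006662106431694153 x 1.0 x 300 gives exactly the 'images' target) and is what mon_target_pg_per_osd = 100 across the 3 OSDs would yield, an inference rather than a logged fact; the 64411926528 in the effective_target_ratio lines is the ~60 GiB capacity those fractions are measured against. Every raw target here is far below the pool's current pg_num, so the reported value (power-of-two quantization, no shrink while within the autoscaler's change threshold, summarized loosely) stays at the current count in every row:

    POOL_BUDGET = 300   # mon_target_pg_per_osd (default 100) * 3 OSDs -- inferred, not logged

    def raw_pg_target(usage_fraction: float, bias: float) -> float:
        return usage_fraction * bias * POOL_BUDGET

    print(raw_pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557 -> '.mgr' line
    print(raw_pg_target(0.0006662106431694153, 1.0))   # ~0.1998632 -> 'images' line
    print(raw_pg_target(0.0006057356161058062, 4.0))   # ~0.7268827 -> 'cephfs.cephfs.meta' line
    print(f"{64411926528 / 2**30:.2f} GiB")            # 59.99 -> matches the pgmap's 60 GiB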
Jan 21 14:26:50 compute-0 ceph-mon[75031]: from='client.14644 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:50 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/166235699' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 21 14:26:50 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3823339273' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 21 14:26:50 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2876238749' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 21 14:26:50 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 21 14:26:50 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2396434496' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 21 14:26:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 21 14:26:51 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550127720' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 21 14:26:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 21 14:26:51 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922064714' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 21 14:26:51 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 21 14:26:51 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451733364' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 21 14:26:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 21 14:26:52 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433541666' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 21 14:26:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 21 14:26:52 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1698848496' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 21 14:26:52 compute-0 ceph-mon[75031]: pgmap v1427: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 14:26:52 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2396434496' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 21 14:26:52 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2550127720' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 21 14:26:52 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3922064714' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 21 14:26:52 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2451733364' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 21 14:26:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 21 14:26:52 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510320955' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 21 14:26:52 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 21 14:26:52 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003765331' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 21 14:26:52 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:24.951776+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:25.951905+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:26.952055+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:27.952195+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:28.952325+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:29.952494+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:30.952611+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:31.952791+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:32.952915+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:33.953063+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 638976 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:34.953220+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 630784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:35.953330+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 630784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:36.953452+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 630784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:37.953603+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 622592 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:38.953717+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:39.953880+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 614400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:40.954081+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 606208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:41.954296+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 606208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:42.954420+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 606208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:43.954533+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 598016 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:44.954617+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 598016 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:45.954764+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 589824 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:46.954973+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 589824 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:47.955153+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 581632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:48.955265+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 581632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:49.955396+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 581632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:50.955618+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 573440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:51.956646+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 573440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:52.956768+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 573440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:53.956920+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 565248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:54.957081+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 565248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:55.957274+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 557056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:56.957429+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 557056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:57.957605+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 557056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:58.957789+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 548864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:53:59.957928+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 548864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:00.958067+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 548864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:01.958315+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 532480 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:02.958462+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 532480 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:03.958625+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 524288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:04.958769+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 524288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:05.958900+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 524288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:06.959023+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 516096 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:07.959195+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 516096 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:08.959328+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 507904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:09.959441+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 507904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:10.959615+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 499712 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:11.959788+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 499712 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:12.959937+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 499712 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:13.960133+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 491520 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:14.960254+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 491520 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:15.960461+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 483328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:16.960980+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 475136 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:17.961115+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 475136 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:18.961601+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 466944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:19.961746+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 466944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:20.962901+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 458752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:21.963517+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 458752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:22.963633+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 458752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:23.963827+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 450560 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:24.964137+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 450560 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:25.964316+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 450560 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:26.964614+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 442368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:27.964798+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 442368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:28.964939+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 434176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:29.965071+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 434176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:30.965249+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 425984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:31.965426+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 425984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:32.965582+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 425984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:33.965707+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 417792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:34.965962+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 417792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:35.966133+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 409600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:36.966392+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 409600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:37.966521+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 409600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:38.966611+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 401408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:39.966764+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 401408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:40.966884+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 401408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:41.967062+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 385024 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:42.967189+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 385024 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:43.967347+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 376832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:44.967634+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 376832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:45.967753+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 376832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:46.967885+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 352256 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:47.968000+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 352256 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:48.968134+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 344064 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:49.968294+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 344064 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:50.968482+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 335872 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:51.968688+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 335872 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:52.968826+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 335872 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:53.968995+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 327680 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:54.969106+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 327680 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:55.969250+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 319488 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:56.969374+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 319488 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5620 writes, 24K keys, 5620 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5620 writes, 886 syncs, 6.34 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5620 writes, 24K keys, 5620 commit groups, 1.0 writes per commit group, ingest: 18.77 MB, 0.03 MB/s
                                           Interval WAL: 5620 writes, 886 syncs, 6.34 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:57.969504+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 262144 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:58.969635+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 253952 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:54:59.969783+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 253952 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:00.969913+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 253952 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:01.970056+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 245760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:02.970211+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 245760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:03.970331+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 237568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:04.970491+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 237568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:05.970642+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 237568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:06.970739+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 221184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:07.970881+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 221184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:08.971065+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:09.971229+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:10.971396+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:11.971607+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 204800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:12.971796+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 204800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:13.971951+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 204800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:14.972091+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 196608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:15.972238+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 196608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:16.972413+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 188416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:17.972627+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 188416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:18.972766+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 188416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:19.972929+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 180224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:20.973081+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 180224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:21.973268+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 180224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:22.973410+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 172032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:23.973625+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 172032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:24.973923+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 163840 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:25.974077+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 163840 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:26.974207+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 147456 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:27.974365+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 147456 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:28.974538+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 147456 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:29.974755+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 139264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:30.974896+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 139264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:31.975088+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 139264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:32.975267+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 131072 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:33.975439+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 131072 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:34.975674+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 131072 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:35.975825+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 122880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:36.975957+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 114688 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:37.976321+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 114688 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:38.976512+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 106496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:39.976650+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 106496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:40.976829+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 98304 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:41.977011+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 90112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:42.977142+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 81920 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:43.977286+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 81920 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:44.977464+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 81920 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:45.977598+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 73728 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:46.977766+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 73728 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:47.977908+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 65536 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:48.978061+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 65536 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:49.978198+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 57344 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:50.978359+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 57344 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:51.978524+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 40960 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:52.978686+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 32768 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:53.978836+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 32768 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:54.979015+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 24576 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:55.979304+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 24576 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:56.979632+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 24576 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:57.979866+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16384 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:58.980068+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16384 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:55:59.980354+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 278.861663818s of 279.011077881s, submitted: 6
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 1007616 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:00.980515+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 761856 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:01.980729+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 499712 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:02.980856+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 499712 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:03.981048+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 499712 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:04.981197+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 499712 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:05.981369+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 499712 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:06.981503+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 499712 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:07.981631+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 499712 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:08.981798+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 499712 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:09.981988+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 491520 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:10.982732+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 491520 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:11.982966+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 491520 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:12.983095+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 483328 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:13.983227+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 483328 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:14.983408+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 475136 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:15.983573+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 475136 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:16.983720+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 475136 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:17.983857+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 466944 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:18.984011+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 466944 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:19.984148+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 466944 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:20.984276+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 458752 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:21.984448+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 458752 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:22.984607+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 450560 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:23.984742+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 450560 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:24.984880+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 450560 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:25.985084+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 442368 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:26.985275+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 442368 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:27.985461+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 434176 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:28.985641+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 434176 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:29.985810+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 434176 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:30.985987+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 425984 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:31.986312+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 425984 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:32.986491+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 417792 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:33.986630+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 417792 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:34.986851+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 417792 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:35.987025+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 417792 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:36.987216+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 401408 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:37.987377+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:38.987527+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 393216 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:39.987678+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 385024 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:40.987815+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 385024 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:41.987977+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 376832 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:42.988146+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 376832 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:43.988332+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 376832 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:44.988504+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 368640 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:45.988664+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 368640 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:46.988881+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 360448 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:47.989041+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 360448 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:48.989232+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 360448 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:49.989500+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 352256 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:50.989652+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 352256 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:51.989848+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 335872 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:52.990176+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 335872 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:53.990473+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 335872 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:54.990656+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 327680 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:55.997146+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 327680 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:56.997443+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 319488 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:57.997709+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 319488 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:58.998018+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:56:59.998248+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:00.998444+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:01.998684+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:02.998939+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:03.999225+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:04.999442+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:05.999629+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:06.999828+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:08.000037+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:09.000227+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:10.000386+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:11.000699+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:12.000998+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:13.001220+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 311296 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:14.001416+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:15.001686+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:16.002101+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:17.002322+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:18.002523+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:19.002727+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:20.002910+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:21.003105+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:22.003383+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:23.003646+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:24.003838+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:25.004133+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:26.004470+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:27.004728+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:28.004966+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:29.005196+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:30.005387+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 303104 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:31.005678+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 286720 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:32.006008+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 286720 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:33.006214+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 286720 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:34.006474+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 278528 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:35.006689+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 278528 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:36.006936+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 278528 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:37.007202+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 278528 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:38.007385+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 270336 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:39.007663+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 270336 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:40.007837+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 270336 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:41.008003+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:42.008237+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:43.008413+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:44.008696+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:45.008889+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:46.009162+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:47.009422+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:48.009630+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:49.009787+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:50.009932+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:51.010071+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:52.010253+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:53.010394+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:54.010522+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:55.010649+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:56.010789+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:57.010930+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:58.011102+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 262144 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:57:59.011246+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 253952 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:00.011395+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 253952 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:01.011676+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 253952 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:02.012065+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 253952 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:03.012323+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 253952 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:04.012506+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 253952 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:05.012681+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 253952 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:06.012867+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 253952 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:07.013036+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 253952 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:08.013205+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 253952 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:09.013368+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 253952 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:10.013594+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 245760 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:11.013862+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:12.014168+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:13.014371+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:14.014678+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:15.014878+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:16.015092+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:17.015297+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:18.015476+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:19.015714+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:20.015871+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:21.016062+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:22.016307+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:23.016429+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:24.016587+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:25.016718+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:26.016862+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:27.017011+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:28.017174+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:29.017303+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:30.017439+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:31.017617+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:32.017854+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:33.017974+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:34.018113+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:35.018273+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:36.018627+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:37.018938+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:38.019136+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:39.019273+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:40.019409+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:41.019570+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:42.019753+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:43.020067+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:44.020220+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:45.021187+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:46.022344+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:47.022494+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:48.022723+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:49.022928+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:50.023084+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:51.023260+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:52.023464+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:53.023634+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:54.023768+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:55.023879+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:56.024074+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:57.024270+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:58.024491+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:58:59.024720+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:00.024973+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:01.025161+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:02.025362+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:03.025673+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:04.025898+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:05.026080+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:06.026250+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:07.026442+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:08.026630+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:09.027133+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:10.027401+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 237568 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:11.027685+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:12.028078+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:13.028261+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:14.028546+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 229376 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:15.028825+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:16.029032+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:17.029314+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:18.029502+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:19.029675+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:20.029804+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:21.029951+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:22.030150+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:23.030301+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:24.030455+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:25.030644+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:26.030812+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 204800 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:27.030969+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 196608 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:28.031165+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 196608 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:29.031319+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 196608 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:30.031456+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 196608 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:31.031644+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 196608 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:32.031848+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 196608 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:33.031968+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 196608 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:34.032167+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 196608 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:35.032330+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 180224 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:36.032595+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 180224 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:37.032791+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:38.032939+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:39.033064+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:40.033267+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:41.033460+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:42.033591+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:43.033752+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:44.033916+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:45.034066+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:46.034200+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:47.034423+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:48.034613+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:49.034835+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 172032 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:50.034998+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 163840 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:51.035169+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 163840 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:52.035399+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 155648 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:53.035597+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 155648 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:54.035768+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 155648 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:55.035895+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 139264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:56.036040+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 139264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:57.036224+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 139264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:58.036412+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 139264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T13:59:59.036571+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 139264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:00.036712+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 139264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:01.036857+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 139264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:02.037115+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 139264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:03.037260+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 139264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:04.037459+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 139264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc ms_handle_reset ms_handle_reset con 0x557eeea72000
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2882926037
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: get_auth_request con 0x557eee306800 auth_method 0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_configure stats_period=5
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:05.037672+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:06.037849+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:07.038067+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:08.038279+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:09.038535+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 ms_handle_reset con 0x557eedf19400 session 0x557eef5fc1c0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: handle_auth_request added challenge on 0x557ef08aec00
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:10.038798+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1064960 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:11.038942+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:12.039218+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:13.039390+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:14.039598+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:15.039738+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:16.039887+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:17.040549+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:18.040764+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:19.040945+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:20.041075+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:21.041235+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:22.041481+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:23.041647+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:24.041804+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:25.041957+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:26.042070+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:27.042190+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:28.042325+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:29.042669+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:30.042832+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:31.043011+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:32.043190+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:33.043365+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:34.043540+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:35.043719+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:36.043856+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1056768 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:37.043993+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:38.044146+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:39.044310+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:40.044473+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:41.044663+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:42.045397+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:43.045530+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:44.045706+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:45.045849+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:46.046048+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:47.046281+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:48.046456+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:49.046620+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:50.046765+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:51.046890+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:52.047032+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:53.047155+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:54.047284+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:55.047440+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:56.047599+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990667 data_alloc: 218103808 data_used: 4225
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:57.047755+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 1032192 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:58.047865+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 1032192 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:00:59.048019+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 298.232849121s of 298.727874756s, submitted: 106
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: handle_auth_request added challenge on 0x557ef08af400
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 884736 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:00.048156+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:01.048284+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:02.048496+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:03.048649+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:04.048826+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:05.048992+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:06.049164+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:07.049351+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:08.049628+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:09.049830+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:10.050011+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:11.050208+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 876544 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:12.050435+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:13.050586+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:14.050712+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:15.050884+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:16.051059+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:17.051238+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:18.051386+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:19.051508+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:20.051682+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:21.051820+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:22.052039+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:23.052166+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:24.052294+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:25.052438+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:26.052572+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 868352 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:27.052700+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:28.052832+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:29.052973+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:30.053138+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:31.053320+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:32.053507+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:33.053668+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:34.053808+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:35.053941+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:36.054121+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:37.054313+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:38.054673+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:39.054886+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:40.055079+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:41.055307+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:42.055632+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:43.055802+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:44.055996+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:45.056212+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:46.056452+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:47.056597+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:48.056788+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:49.056960+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:50.057179+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:51.057360+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:52.057591+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:53.057735+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:54.057864+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:55.060410+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:56.060532+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:57.060617+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:58.060758+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:01:59.060935+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:00.061070+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:01.061199+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:02.061363+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:03.061483+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:04.061650+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:05.061786+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:06.061914+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:07.062096+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:08.062224+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:09.062377+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:10.062600+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:11.062727+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:12.062910+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:13.063049+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:14.063235+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:15.063381+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:16.063522+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:17.063627+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:18.063773+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:19.063965+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:20.064129+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:21.064291+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:22.064481+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:23.064622+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:24.064760+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:25.064913+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:26.065060+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:27.065225+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:28.065431+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:29.065547+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:30.065675+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:31.065781+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:32.065952+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:33.066073+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:34.066343+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:35.066522+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:36.066648+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:37.066755+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:38.066862+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 786432 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:39.066988+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:40.067160+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:41.067350+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:42.067633+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:43.067852+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:44.068000+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:45.068178+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:46.068313+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:47.068450+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:48.068588+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:49.068700+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:50.068847+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:51.068995+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:52.069178+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:53.069327+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:54.069477+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:55.069685+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 770048 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:56.069919+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 753664 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:57.070111+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 753664 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:58.070292+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 753664 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:02:59.070459+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:00.070611+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:01.070747+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:02.070960+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:03.071099+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:04.071293+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:05.071547+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:06.071804+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:07.071915+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:08.072058+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:09.072188+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:10.072328+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 884736 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:11.072479+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:12.072648+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:13.072780+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:14.072930+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:15.073160+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:16.073324+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:17.073489+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:18.073640+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:19.073774+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:20.073906+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:21.074075+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:22.074268+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:23.074450+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:24.074628+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:25.074750+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:26.074840+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:27.075000+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:28.075157+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:29.075299+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:30.075484+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:31.075626+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:32.075802+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:33.075919+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:34.076040+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:35.076106+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:36.076227+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:37.076317+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:38.076469+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:39.076636+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:40.076725+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:41.076834+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:42.077004+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:43.077160+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:44.077263+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:45.077483+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:46.077624+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:47.077778+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:48.077952+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:49.078132+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:50.078291+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:51.078433+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:52.078584+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:53.078734+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:54.078870+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:55.079043+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:56.079210+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:57.079387+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:58.079600+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:03:59.079758+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:00.079886+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:01.080043+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:02.080279+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:03.080486+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:04.080640+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:05.080876+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:06.081058+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:07.081228+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread fragmentation_score=0.000118 took=0.000014s
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:08.081402+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:09.081616+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:10.081780+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:11.081911+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:12.082059+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:13.082227+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:14.082424+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:15.082617+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:16.082752+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:17.082945+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:18.083102+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:19.083293+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:20.083442+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:21.083639+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:22.083849+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:23.084132+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:24.084314+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:25.084492+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:26.084657+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:27.084820+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:28.084953+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:29.085157+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:30.085336+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:31.085537+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:32.085806+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:33.085960+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:34.086083+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:35.086283+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:36.086454+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:37.086677+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:38.086817+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:39.086989+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:40.087121+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:41.087275+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:42.087504+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:43.087706+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:44.087850+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:45.088029+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:46.088221+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:47.088398+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:48.088605+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:49.088811+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:50.088992+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:51.089148+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 802816 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:52.089345+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:53.089530+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:54.089802+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:55.090033+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:56.090181+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:57.090403+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5868 writes, 24K keys, 5868 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5868 writes, 1010 syncs, 5.81 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd35a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557eecd358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:58.090638+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 761856 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:04:59.090799+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 761856 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:00.090946+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 761856 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:01.091129+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 761856 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:02.091358+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 761856 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:03.091536+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 761856 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:04.091761+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 761856 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:05.091944+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 761856 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:06.092151+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 761856 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:07.092309+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 753664 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:08.092452+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 753664 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:09.092648+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:10.092798+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 737280 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:11.093000+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:12.093215+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:13.093437+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:14.093636+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:15.093812+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:16.093994+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:17.094129+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:18.094291+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:19.094408+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:20.094590+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:21.094792+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:22.094973+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:23.095165+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:24.095340+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:25.095649+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:26.095905+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:27.096199+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:28.096529+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:29.096854+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:30.097135+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:31.097454+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:32.097919+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:33.098171+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:34.098369+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:35.098619+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:36.098845+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:37.099035+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:38.099328+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:39.099977+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:40.100206+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:41.100439+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:42.100703+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:43.100915+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:44.101366+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:45.101606+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:46.101834+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:47.102048+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:48.102296+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:49.102544+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:50.102803+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:51.103056+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:52.103328+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:53.103540+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:54.103843+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:55.104075+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:56.104365+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:57.104615+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 704512 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:58.104925+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 704512 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:05:59.105256+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 704512 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 300.266296387s of 300.787506104s, submitted: 18
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:00.105471+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1441792 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:01.105643+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1425408 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:02.105877+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 1400832 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:03.106075+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 1400832 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:04.106311+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 1400832 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:05.106536+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 1400832 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:06.106948+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 1400832 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:07.107178+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 1400832 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:08.107420+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 1400832 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:09.107786+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 1400832 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:10.108076+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1392640 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:11.108386+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1392640 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:12.108925+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1392640 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:13.109152+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1392640 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:14.109453+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1392640 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:15.109720+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1392640 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:16.110030+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1392640 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:17.110306+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1384448 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:18.110598+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1384448 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:19.110780+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1384448 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:20.110995+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1384448 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:21.111265+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1384448 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:22.111596+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1384448 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:23.113861+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1384448 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:24.114084+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1384448 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:25.114265+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1384448 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:26.114521+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:27.115060+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:28.115298+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:29.115537+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:30.115748+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:31.115998+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:32.116368+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:33.116594+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:34.116875+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:35.117153+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:36.117451+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:37.117706+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:38.117874+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:39.118060+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:40.118261+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:41.118696+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:42.119037+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:43.119295+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:44.119604+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:45.119794+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:46.120020+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:47.120281+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:48.120660+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:49.120934+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:50.121204+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:51.121391+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:52.121731+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:53.122059+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:54.122283+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:55.122503+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:56.122771+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:57.123093+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:58.123403+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:06:59.123880+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:00.124078+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:01.124422+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:02.124783+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:03.124986+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:04.125201+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:05.125454+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:06.125731+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:07.125984+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:08.126202+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:09.126409+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:10.126664+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:11.126904+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:12.127199+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:13.127438+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:14.127714+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:15.127925+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:16.128136+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:17.128341+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:18.128631+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:19.128894+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:20.129165+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:21.129395+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:22.129686+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:23.129950+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:24.130184+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:25.130389+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:26.130637+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:27.130805+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:28.131008+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:29.131169+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:30.131360+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:31.131535+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:32.131761+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:33.131903+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:34.132050+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:35.132181+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:36.132356+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:37.132501+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:38.132695+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:39.132896+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:40.133070+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:41.133263+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:42.133452+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:43.133638+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:44.133811+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:45.133964+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:46.134150+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:47.134281+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:48.134707+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:49.134874+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:50.135048+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:51.135198+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:52.135438+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:53.135527+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:54.135661+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:55.135788+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:56.135949+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:57.136070+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:58.136207+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:07:59.136368+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:00.136497+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:01.136709+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:02.136946+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:03.137104+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:04.137280+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:05.137465+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:06.137694+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:07.137844+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:08.138066+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:09.138262+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:10.138496+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:11.138711+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:12.138905+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:13.139079+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:14.139651+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:15.139789+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:16.139991+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:17.140153+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:18.140299+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:19.140441+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:20.140649+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:21.140825+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:22.141007+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:23.141184+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:24.141363+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:25.141526+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:26.141724+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:27.141932+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:28.142189+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:29.142307+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:30.142447+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:31.142615+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:32.142807+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:33.142918+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:34.143069+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:35.143225+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:36.143409+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1359872 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:37.143654+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:38.143774+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:39.143922+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:40.144072+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:41.144223+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:42.144423+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:43.144545+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:44.144759+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:45.144918+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:46.145095+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:47.145285+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:48.145537+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:49.145856+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:50.146048+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1351680 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:51.146230+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:52.146427+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:53.146589+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:54.146801+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:55.147001+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:56.147206+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:57.147422+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:58.147627+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:08:59.147809+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:00.147987+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:01.148161+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:02.148355+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:03.148524+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:04.148713+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:05.148883+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:06.149054+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:07.149276+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:08.149455+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:09.149644+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:10.149836+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:11.150011+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:12.150468+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:13.150657+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:14.150844+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:15.151077+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:16.151326+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:17.151482+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb7ee1/0x177000, compress 0x0/0x0/0x0, omap 0x12a75, meta 0x2bbd58b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:18.151670+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991051 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 198.224487305s of 198.447875977s, submitted: 106
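The _kv_sync_thread line above reports idle time against wall time plus a transaction count; the implied load follows from simple arithmetic on those three numbers (an aside):

```python
# Load implied by the _kv_sync_thread utilization line above (aside).
idle, wall, submitted = 198.224487305, 198.447875977, 106
busy = wall - idle
print(f"busy {busy:.3f} s of {wall:.1f} s ({busy / wall:.3%})")  # ~0.113%
print(f"{submitted / wall:.2f} KV transactions/s")               # ~0.53/s
```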
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:19.151863+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1343488 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fceb0000/0x0/0x4ffc00000, data 0xb9a7d/0x17a000, compress 0x0/0x0/0x0, omap 0x12afa, meta 0x2bbd506), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 123 handle_osd_map epochs [124,124], i have 124, src has [1,124]
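The handle_osd_map lines carry the incoming epoch range, the OSD's local epoch, and the source's range. A log-scanning helper (an illustration for analysis, not Ceph code) that flags whether each message leaves the OSD caught up:

```python
# Track epoch catch-up from handle_osd_map lines like the two above (aside).
import re

samples = [
    "osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]",
    "osd.0 123 handle_osd_map epochs [124,124], i have 124, src has [1,124]",
]
pat = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+)")
for s in samples:
    first, last, have = map(int, pat.search(s).groups())
    state = "caught up" if have >= last else f"behind by {last - have}"
    print(f"incoming [{first},{last}], local {have}: {state}")
```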
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:20.152037+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 1327104 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xbd225/0x180000, compress 0x0/0x0/0x0, omap 0x12b7f, meta 0x2bbd481), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:21.152199+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 1327104 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:22.152405+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 1318912 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:23.152604+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001021 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 1318912 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:24.152996+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 1318912 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xbd225/0x180000, compress 0x0/0x0/0x0, omap 0x12b7f, meta 0x2bbd481), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
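The _send_mon_message target above uses Ceph's entity-address form, protocol:ip:port/nonce; port 3300 is the standard messenger-v2 port (legacy v1 uses 6789). A sketch that splits the form shown (an aside):

```python
# Split the entity address from the _send_mon_message line above (aside).
import re

addr = "v2:192.168.122.100:3300/0"
proto, host, port, nonce = re.fullmatch(
    r"(v[12]):([\d.]+):(\d+)/(\d+)", addr).groups()
print(proto, host, int(port), int(nonce))  # v2 192.168.122.100 3300 0
```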
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:25.153190+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 1302528 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:26.153336+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 1302528 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:27.153550+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 1294336 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xbd225/0x180000, compress 0x0/0x0/0x0, omap 0x12b7f, meta 0x2bbd481), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:28.153886+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004259 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 1277952 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:29.154075+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 1277952 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:30.154223+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 1277952 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:31.154494+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.737357140s of 12.798511505s, submitted: 12
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:32.154837+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea4000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:33.154972+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005641 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:34.155098+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:35.155291+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea4000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:36.155491+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:37.155711+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea4000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:38.155914+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005641 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:39.156112+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:40.156288+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:41.156516+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea4000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:42.156824+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:43.157041+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005641 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:44.157194+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:45.157350+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:46.157596+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea4000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:47.157755+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:48.157867+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005641 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:49.158061+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:50.158203+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:51.158410+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea4000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:52.158626+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea4000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1269760 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:53.158792+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 12
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
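The active-mgr announcement above carries an address vector with both the msgr2 and legacy v1 endpoints; splitting it needs only plain string handling (an aside):

```python
# Split the mgr address vector from the handle_mgr_map line above (aside).
addrvec = ("[v2:192.168.122.100:6800/2882926037,"
           "v1:192.168.122.100:6801/2882926037]")
for entry in addrvec.strip("[]").split(","):
    proto, rest = entry.split(":", 1)
    hostport, nonce = rest.rsplit("/", 1)
    host, port = hostport.rsplit(":", 1)
    print(proto, host, port, nonce)
```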
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005641 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea4000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:54.158944+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:55.159215+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea4000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:56.159410+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:57.159583+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:58.159719+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005641 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:09:59.159904+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:00.160083+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 13
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea4000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:01.160217+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.223926544s of 30.229436874s, submitted: 9
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1146880 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:02.160350+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea4000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 1220608 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:03.160504+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005081 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 1220608 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:04.160670+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 1220608 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:05.160876+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 1220608 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:06.161105+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 1220608 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:07.161322+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 1220608 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:08.161471+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005081 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 1220608 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:09.161607+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 1220608 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:10.161740+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 1220608 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:11.161899+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.068866730s of 10.162009239s, submitted: 5
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:12.162072+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:13.162278+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005097 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:14.162404+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:15.162548+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:16.162772+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:17.162899+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:18.163067+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005081 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:19.163260+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:20.163389+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:21.163512+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1212416 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:22.164098+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:23.165443+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005081 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:24.166228+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:25.166485+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:26.167211+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:27.168254+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:28.168534+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005081 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:29.168871+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:30.169080+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:31.169247+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:32.169676+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.137937546s of 21.150382996s, submitted: 2
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:33.170038+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005081 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:34.170246+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:35.170487+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:36.170683+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:37.171237+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:38.171688+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005097 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:39.171920+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc085c/0x186000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1204224 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:40.172226+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:41.172515+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:42.172892+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1179648 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:43.173118+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008415 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1179648 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fcea1000/0x0/0x4ffc00000, data 0xc2461/0x189000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:44.173383+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1179648 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:45.173667+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.675517082s of 12.743407249s, submitted: 23
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1179648 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:46.173914+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1179648 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:47.174148+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1179648 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:48.174353+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008559 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 1179648 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:49.174495+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fcea1000/0x0/0x4ffc00000, data 0xc2461/0x189000, compress 0x0/0x0/0x0, omap 0x12cdd, meta 0x2bbd323), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:50.174649+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:51.174862+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:52.175023+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fce9e000/0x0/0x4ffc00000, data 0xc3ee0/0x18c000, compress 0x0/0x0/0x0, omap 0x12db4, meta 0x2bbd24c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:53.175266+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011333 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:54.175413+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:55.175588+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:56.175759+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:57.175946+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fce9e000/0x0/0x4ffc00000, data 0xc3ee0/0x18c000, compress 0x0/0x0/0x0, omap 0x12db4, meta 0x2bbd24c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:58.176133+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011333 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:10:59.176374+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fce9e000/0x0/0x4ffc00000, data 0xc3ee0/0x18c000, compress 0x0/0x0/0x0, omap 0x12db4, meta 0x2bbd24c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:00.176650+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:01.176845+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:02.177140+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:03.177389+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011333 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fce9e000/0x0/0x4ffc00000, data 0xc3ee0/0x18c000, compress 0x0/0x0/0x0, omap 0x12db4, meta 0x2bbd24c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:04.177633+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:05.177879+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:06.178092+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fce9e000/0x0/0x4ffc00000, data 0xc3ee0/0x18c000, compress 0x0/0x0/0x0, omap 0x12db4, meta 0x2bbd24c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:07.178332+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:08.178589+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011333 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fce9e000/0x0/0x4ffc00000, data 0xc3ee0/0x18c000, compress 0x0/0x0/0x0, omap 0x12db4, meta 0x2bbd24c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:09.178803+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:10.179037+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:11.179211+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:12.179464+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:13.179685+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011333 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:14.179941+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fce9e000/0x0/0x4ffc00000, data 0xc3ee0/0x18c000, compress 0x0/0x0/0x0, omap 0x12db4, meta 0x2bbd24c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:15.180090+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:16.180220+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 1163264 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:17.180384+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.992893219s of 32.014202118s, submitted: 13
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:18.180677+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010629 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:19.180887+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fcea0000/0x0/0x4ffc00000, data 0xc3ee0/0x18c000, compress 0x0/0x0/0x0, omap 0x12db4, meta 0x2bbd24c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:20.181163+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:21.181389+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:22.181660+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 129 handle_osd_map epochs [129,130], i have 130, src has [1,130]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:23.181857+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013963 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:24.182063+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: handle_auth_request added challenge on 0x557eeffcdc00
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 2039808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9b000/0x0/0x4ffc00000, data 0xc5ae5/0x18f000, compress 0x0/0x0/0x0, omap 0x12e39, meta 0x2bbd1c7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:25.182270+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 2039808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:26.182493+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 2039808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:27.182725+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.967432022s of 10.007649422s, submitted: 22
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 2039808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:28.182965+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1015799 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 2023424 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:29.183115+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 2023424 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:30.183355+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9a000/0x0/0x4ffc00000, data 0xc5b80/0x190000, compress 0x0/0x0/0x0, omap 0x12e39, meta 0x2bbd1c7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 2023424 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 130 handle_osd_map epochs [130,131], i have 131, src has [1,131]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:31.183590+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 2015232 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:32.183841+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fce98000/0x0/0x4ffc00000, data 0xc7564/0x192000, compress 0x0/0x0/0x0, omap 0x12f0e, meta 0x2bbd0f2), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 2015232 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:33.183977+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026023 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 958464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:34.184185+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xcae69/0x199000, compress 0x0/0x0/0x0, omap 0x12f93, meta 0x2bbd06d), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 958464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:35.184387+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 933888 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:36.184575+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 933888 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:37.184826+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 933888 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.337251663s of 10.527725220s, submitted: 94
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:38.185059+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029257 data_alloc: 218103808 data_used: 4685
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 933888 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xccb39/0x19d000, compress 0x0/0x0/0x0, omap 0x13018, meta 0x2bbcfe8), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:39.185231+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 933888 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:40.185428+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 917504 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:41.185627+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 901120 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:42.185799+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 901120 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:43.185956+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034807 data_alloc: 218103808 data_used: 5335
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:44.186142+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd00d7/0x1a1000, compress 0x0/0x0/0x0, omap 0x13160, meta 0x2bbcea0), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:45.186314+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:46.186541+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:47.186702+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd00d7/0x1a1000, compress 0x0/0x0/0x0, omap 0x13160, meta 0x2bbcea0), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:48.186977+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033161 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:49.187184+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:50.187411+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd00d7/0x1a1000, compress 0x0/0x0/0x0, omap 0x13160, meta 0x2bbcea0), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 136 handle_osd_map epochs [136,137], i have 137, src has [1,137]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.982851982s of 13.036751747s, submitted: 35
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:51.187634+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:52.187887+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:53.188149+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035935 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:54.188319+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:55.188514+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:56.188690+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce88000/0x0/0x4ffc00000, data 0xd1b76/0x1a4000, compress 0x0/0x0/0x0, omap 0x132f5, meta 0x2bbcd0b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:57.188852+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:58.188982+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035935 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:11:59.189127+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:00.189261+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce88000/0x0/0x4ffc00000, data 0xd1b76/0x1a4000, compress 0x0/0x0/0x0, omap 0x132f5, meta 0x2bbcd0b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:01.189476+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:02.189676+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:03.189857+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035935 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:04.190060+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:05.190450+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:06.190663+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce88000/0x0/0x4ffc00000, data 0xd1b76/0x1a4000, compress 0x0/0x0/0x0, omap 0x132f5, meta 0x2bbcd0b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:07.190787+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:08.190941+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.451763153s of 17.465898514s, submitted: 13
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035951 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:09.191040+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:10.191181+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce88000/0x0/0x4ffc00000, data 0xd1b76/0x1a4000, compress 0x0/0x0/0x0, omap 0x132f5, meta 0x2bbcd0b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:11.191315+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:12.191959+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:13.192115+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035951 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:14.192275+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:15.192401+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce88000/0x0/0x4ffc00000, data 0xd1b76/0x1a4000, compress 0x0/0x0/0x0, omap 0x132f5, meta 0x2bbcd0b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:16.192525+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:17.192676+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:18.192824+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035951 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:19.192973+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce88000/0x0/0x4ffc00000, data 0xd1b76/0x1a4000, compress 0x0/0x0/0x0, omap 0x132f5, meta 0x2bbcd0b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:20.193118+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:21.193280+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:22.193491+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce88000/0x0/0x4ffc00000, data 0xd1b76/0x1a4000, compress 0x0/0x0/0x0, omap 0x132f5, meta 0x2bbcd0b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:23.193691+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035951 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:24.193847+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.452283859s of 16.457082748s, submitted: 3
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:25.194040+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:26.194206+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:27.194406+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce87000/0x0/0x4ffc00000, data 0xd1c11/0x1a5000, compress 0x0/0x0/0x0, omap 0x132f5, meta 0x2bbcd0b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:28.194661+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035951 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:29.194834+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:30.195033+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:31.195292+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fce88000/0x0/0x4ffc00000, data 0xd1b76/0x1a4000, compress 0x0/0x0/0x0, omap 0x132f5, meta 0x2bbcd0b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:32.195660+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:33.195981+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fce83000/0x0/0x4ffc00000, data 0xd377b/0x1a7000, compress 0x0/0x0/0x0, omap 0x132f5, meta 0x2bbcd0b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039429 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:34.196260+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:35.196519+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:36.196750+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:37.196930+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:38.197101+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039429 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:39.197265+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fce83000/0x0/0x4ffc00000, data 0xd377b/0x1a7000, compress 0x0/0x0/0x0, omap 0x132f5, meta 0x2bbcd0b), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:40.197423+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 138 handle_osd_map epochs [138,139], i have 139, src has [1,139]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.945674896s of 16.088001251s, submitted: 26
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 843776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:41.197617+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 843776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:42.197821+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 843776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:43.198007+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042059 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 843776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:44.198137+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce80000/0x0/0x4ffc00000, data 0xd51fa/0x1aa000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 843776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:45.198279+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 843776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:46.198465+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 843776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:47.198667+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 1892352 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:48.198867+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041499 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 1892352 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:49.199057+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 1892352 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:50.199279+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd51fa/0x1aa000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 1892352 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:51.199432+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd51fa/0x1aa000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 1892352 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:52.199673+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:53.199846+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 1892352 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041499 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:54.200021+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 1892352 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd51fa/0x1aa000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:55.200178+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 1884160 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:56.200395+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 1884160 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:57.200641+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 1875968 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.770023346s of 16.792446136s, submitted: 17
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:58.200809+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 1875968 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044723 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:12:59.200983+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 802816 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: handle_auth_request added challenge on 0x557eee7f2800
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: handle_auth_request added challenge on 0x557ef14dc000
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:00.201158+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 540672 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce80000/0x0/0x4ffc00000, data 0xd5401/0x1ac000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 14
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:01.201297+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:02.201449+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce80000/0x0/0x4ffc00000, data 0xd5401/0x1ac000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:03.201588+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd51fa/0x1aa000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042441 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:04.201783+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd51fa/0x1aa000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:05.201927+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd51fa/0x1aa000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:06.202058+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:07.202202+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:08.202421+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042457 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:09.202648+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:10.202896+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd51fa/0x1aa000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:11.203092+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:12.203282+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:13.203448+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd51fa/0x1aa000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042457 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:14.203609+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:15.203755+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.198045731s of 17.835994720s, submitted: 7
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce82000/0x0/0x4ffc00000, data 0xd51fa/0x1aa000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:16.203875+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:17.204083+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:18.204237+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044133 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:19.204363+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce81000/0x0/0x4ffc00000, data 0xd5295/0x1ab000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:20.204626+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:21.204815+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:22.205033+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:23.205175+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 483328 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fce80000/0x0/0x4ffc00000, data 0xd5330/0x1ac000, compress 0x0/0x0/0x0, omap 0x133f6, meta 0x2bbcc0a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:24.205377+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044005 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 475136 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:25.205595+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 450560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:26.205758+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 450560 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.692792892s of 10.886620522s, submitted: 8
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:27.205929+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:28.206145+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:29.206328+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050133 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xd6f35/0x1af000, compress 0x0/0x0/0x0, omap 0x1347b, meta 0x2bbcb85), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:30.206537+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:31.206842+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd6e9a/0x1ae000, compress 0x0/0x0/0x0, omap 0x1347b, meta 0x2bbcb85), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:32.207058+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xd8919/0x1b1000, compress 0x0/0x0/0x0, omap 0x134b6, meta 0x2bbcb4a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:33.207206+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:34.207357+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051599 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 557056 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd887e/0x1b0000, compress 0x0/0x0/0x0, omap 0x134b6, meta 0x2bbcb4a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:35.207597+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:36.208408+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.414586067s of 10.583475113s, submitted: 55
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:37.208889+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:38.209097+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:39.209681+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053291 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:40.209869+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce79000/0x0/0x4ffc00000, data 0xd8919/0x1b1000, compress 0x0/0x0/0x0, omap 0x134b6, meta 0x2bbcb4a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:41.210113+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 548864 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:42.210362+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:43.210912+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7b000/0x0/0x4ffc00000, data 0xd8919/0x1b1000, compress 0x0/0x0/0x0, omap 0x134b6, meta 0x2bbcb4a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:44.211249+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054247 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:45.211660+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:46.212030+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd89b4/0x1b2000, compress 0x0/0x0/0x0, omap 0x134b6, meta 0x2bbcb4a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:47.212274+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:48.212458+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 516096 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.377759933s of 11.542425156s, submitted: 6
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:49.212738+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1055651 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 491520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:50.213050+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 491520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:51.213206+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 491520 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd89b4/0x1b2000, compress 0x0/0x0/0x0, omap 0x134b6, meta 0x2bbcb4a), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:52.213465+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 141 handle_osd_map epochs [141,142], i have 142, src has [1,142]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 434176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:53.213600+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 434176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:54.213776+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059239 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 434176 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fce76000/0x0/0x4ffc00000, data 0xda6b2/0x1b6000, compress 0x0/0x0/0x0, omap 0x1353b, meta 0x2bbcac5), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:55.214012+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:56.214254+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 417792 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:57.214543+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:58.214731+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:13:59.214922+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058649 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:00.215058+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fce77000/0x0/0x4ffc00000, data 0xda615/0x1b5000, compress 0x0/0x0/0x0, omap 0x1353b, meta 0x2bbcac5), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 142 handle_osd_map epochs [142,143], i have 143, src has [1,143]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.344754219s of 12.445875168s, submitted: 51
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:01.215211+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:02.215499+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:03.215696+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:04.215895+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060849 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:05.216118+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:06.216329+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:07.216622+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:08.216765+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:09.216911+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060865 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:10.217267+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 401408 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 ms_handle_reset con 0x557eee7f2800 session 0x557eef659c00
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 ms_handle_reset con 0x557ef14dc000 session 0x557eef239a40
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:11.217409+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 40960 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.541168213s of 10.590239525s, submitted: 151
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:12.217643+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 15
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:13.217755+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce76000/0x0/0x4ffc00000, data 0xdbf52/0x1b6000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:14.217913+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060259 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:15.218426+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:16.218709+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce76000/0x0/0x4ffc00000, data 0xdbf52/0x1b6000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:17.219414+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:18.219693+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce76000/0x0/0x4ffc00000, data 0xdbf52/0x1b6000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:19.220279+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060259 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:20.220747+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:21.221091+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce76000/0x0/0x4ffc00000, data 0xdbf52/0x1b6000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:22.221428+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.880774498s of 10.898983955s, submitted: 4
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:23.221638+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:24.221831+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061951 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:25.221997+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:26.222242+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:27.222491+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:28.222805+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:29.223005+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061951 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:30.223307+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:31.223673+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:32.223895+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:33.224038+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 24576 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.347071648s of 11.349162102s, submitted: 1
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:34.224268+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063483 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 1073152 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce74000/0x0/0x4ffc00000, data 0xdc062/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:35.224477+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 1073152 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:36.224631+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 1073152 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:37.224862+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 1073152 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:38.225066+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 1073152 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:39.225242+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063499 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 1048576 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:40.225496+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc01d/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 1048576 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:41.225702+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 1048576 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:42.225922+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:43.226104+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:44.226325+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062765 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:45.226624+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:46.226763+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:47.226942+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:48.227101+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:49.227313+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062765 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:50.227618+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:51.227809+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:52.228056+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:53.228213+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:54.228406+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062765 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:55.228639+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:56.228866+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:57.229114+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7005 writes, 27K keys, 7005 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7005 writes, 1473 syncs, 4.76 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1137 writes, 2477 keys, 1137 commit groups, 1.0 writes per commit group, ingest: 1.36 MB, 0.00 MB/s
                                           Interval WAL: 1137 writes, 463 syncs, 2.46 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 1040384 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:58.229303+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.551012039s of 24.672239304s, submitted: 11
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 1032192 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:14:59.229474+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062781 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 1024000 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce75000/0x0/0x4ffc00000, data 0xdbfed/0x1b7000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:00.229717+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 1024000 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:01.229902+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 1024000 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:02.230190+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 1024000 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:03.230334+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 1024000 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:04.230524+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce74000/0x0/0x4ffc00000, data 0xdc062/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc ms_handle_reset ms_handle_reset con 0x557eee306800
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2882926037
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: get_auth_request con 0x557ef16d7c00 auth_method 0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_configure stats_period=5
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064441 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:05.230699+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:06.230878+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:07.231029+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:08.231182+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc01d/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:09.231399+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.985914230s of 11.004243851s, submitted: 8
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062749 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 ms_handle_reset con 0x557ef08aec00 session 0x557eee902a80
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: handle_auth_request added challenge on 0x557ef08af000
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:10.231657+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:11.231874+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:12.232088+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:13.232313+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:14.232493+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce74000/0x0/0x4ffc00000, data 0xdc04c/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064457 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:15.232626+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:16.232786+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc01d/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:17.233175+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:18.233312+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:19.233469+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064457 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:20.233655+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:21.234137+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.295702934s of 12.349294662s, submitted: 14
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:22.234408+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc01d/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:23.234866+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 778240 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:24.235135+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066005 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 778240 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:25.235612+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xdc1b4/0x1ba000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 720896 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:26.235789+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 720896 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:27.236209+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 720896 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:28.236400+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc119/0x1b9000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 712704 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:29.236582+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066005 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:30.236800+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce72000/0x0/0x4ffc00000, data 0xdc0e4/0x1b9000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:31.237012+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:32.237306+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.681983948s of 11.158711433s, submitted: 24
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:33.237608+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:34.237747+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065431 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:35.237896+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:36.238123+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc04f/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:37.238380+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:38.238704+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:39.238908+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065271 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 688128 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:40.239120+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 630784 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:41.239292+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 630784 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:42.239674+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xdc119/0x1b9000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 630784 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:43.239856+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xdc119/0x1b9000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.432433128s of 10.524833679s, submitted: 20
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 630784 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:44.240021+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066373 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 606208 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:45.240174+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 606208 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:46.240340+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 598016 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:47.240471+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 598016 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:48.240643+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc049/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 598016 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:49.240793+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc04f/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067475 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 589824 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:50.240992+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc04f/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 589824 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:51.241124+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 589824 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:52.241308+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 589824 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:53.241472+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xdc01d/0x1b8000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 516096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:54.241657+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067187 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 516096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:55.241844+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.921222687s of 12.012028694s, submitted: 19
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 516096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:56.242003+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 516096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:57.242248+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 516096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:58.242417+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce72000/0x0/0x4ffc00000, data 0xdc0e7/0x1b9000, compress 0x0/0x0/0x0, omap 0x1363f, meta 0x2bbc9c1), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 516096 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:15:59.242527+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067715 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 532480 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:00.242714+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1441792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:01.242842+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1441792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:02.243030+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce70000/0x0/0x4ffc00000, data 0xddbb6/0x1ba000, compress 0x0/0x0/0x0, omap 0x136c4, meta 0x2bbc93c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1441792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:03.243181+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1441792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:04.243346+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069371 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1441792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:05.243490+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.634387016s of 10.006305695s, submitted: 157
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1441792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:06.243647+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1441792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:07.243813+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddb87/0x1ba000, compress 0x0/0x0/0x0, omap 0x136c4, meta 0x2bbc93c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddb87/0x1ba000, compress 0x0/0x0/0x0, omap 0x136c4, meta 0x2bbc93c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1441792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:08.243976+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1441792 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:09.244136+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce72000/0x0/0x4ffc00000, data 0xddb87/0x1ba000, compress 0x0/0x0/0x0, omap 0x136c4, meta 0x2bbc93c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071063 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 1392640 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:10.244343+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 1392640 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:11.244462+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 1384448 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:12.244637+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1368064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:13.244769+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce6c000/0x0/0x4ffc00000, data 0xdf6cf/0x1be000, compress 0x0/0x0/0x0, omap 0x137d9, meta 0x2bbc827), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1368064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:14.244924+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073087 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1368064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:15.245125+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1368064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:16.245371+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1368064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce6f000/0x0/0x4ffc00000, data 0xdf606/0x1bd000, compress 0x0/0x0/0x0, omap 0x137d9, meta 0x2bbc827), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:17.245542+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce6f000/0x0/0x4ffc00000, data 0xdf606/0x1bd000, compress 0x0/0x0/0x0, omap 0x137d9, meta 0x2bbc827), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1368064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.389775276s of 12.429254532s, submitted: 25
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:18.245725+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1376256 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:19.245973+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078195 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1376256 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:20.246161+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xdf7fa/0x1c0000, compress 0x0/0x0/0x0, omap 0x137d9, meta 0x2bbc827), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1376256 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:21.246362+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 1376256 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:22.246599+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1368064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:23.246781+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce6b000/0x0/0x4ffc00000, data 0xdf7c7/0x1c0000, compress 0x0/0x0/0x0, omap 0x137d9, meta 0x2bbc827), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1368064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:24.246962+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077891 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1368064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:25.247138+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 1368064 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:26.247317+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: handle_auth_request added challenge on 0x557ef0d46c00
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 16
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0xdfae3/0x1c3000, compress 0x0/0x0/0x0, omap 0x137d9, meta 0x2bbc827), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 1335296 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:27.247464+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 1318912 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:28.247604+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.536841393s of 10.588269234s, submitted: 24
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1310720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:29.247809+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xdfbac/0x1c4000, compress 0x0/0x0/0x0, omap 0x1394a, meta 0x2bbc6b6), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xdf890/0x1c1000, compress 0x0/0x0/0x0, omap 0x1394a, meta 0x2bbc6b6), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084077 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1310720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:30.247973+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1310720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:31.248123+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1310720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:32.248292+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1310720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:33.248503+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1310720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:34.248685+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0xdf890/0x1c1000, compress 0x0/0x0/0x0, omap 0x1394a, meta 0x2bbc6b6), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084093 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1310720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:35.248836+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 1310720 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:36.249012+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 1302528 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:37.249170+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 1302528 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:38.249314+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.982058525s of 10.004622459s, submitted: 10
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 1294336 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:39.249455+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082543 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 1294336 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:40.249639+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce6a000/0x0/0x4ffc00000, data 0xdf8f9/0x1c2000, compress 0x0/0x0/0x0, omap 0x1394a, meta 0x2bbc6b6), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 1294336 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:41.250662+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fce68000/0x0/0x4ffc00000, data 0xdf92c/0x1c2000, compress 0x0/0x0/0x0, omap 0x1394a, meta 0x2bbc6b6), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1220608 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:42.250828+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 1220608 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:43.250988+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 737280 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:44.251133+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083663 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:45.251293+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 688128 heap: 85975040 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:46.251448+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 86638592 unmapped: 385024 heap: 87023616 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:47.251630+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 786432 heap: 88072192 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fbc6a000/0x0/0x4ffc00000, data 0x141a70/0x222000, compress 0x0/0x0/0x0, omap 0x1394a, meta 0x3d5c6b6), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:48.251783+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 425984 heap: 88072192 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fbc11000/0x0/0x4ffc00000, data 0x195c5c/0x277000, compress 0x0/0x0/0x0, omap 0x13a11, meta 0x3d5c5ef), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:49.251937+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 88129536 unmapped: 991232 heap: 89120768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.359784126s of 11.173682213s, submitted: 101
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101479 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:50.252133+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 88252416 unmapped: 868352 heap: 89120768 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fbc11000/0x0/0x4ffc00000, data 0x195c5c/0x277000, compress 0x0/0x0/0x0, omap 0x13a11, meta 0x3d5c5ef), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:51.252248+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fbbfc000/0x0/0x4ffc00000, data 0x1aade2/0x28c000, compress 0x0/0x0/0x0, omap 0x13a11, meta 0x3d5c5ef), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 1482752 heap: 90169344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbbfc000/0x0/0x4ffc00000, data 0x1aade2/0x28c000, compress 0x0/0x0/0x0, omap 0x13a11, meta 0x3d5c5ef), peers [1,2] op hist [1])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:52.252413+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 1351680 heap: 90169344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:53.252608+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 1351680 heap: 90169344 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:54.252750+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 90554368 unmapped: 663552 heap: 91217920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105755 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:55.252890+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 90791936 unmapped: 425984 heap: 91217920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:56.253058+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 90791936 unmapped: 425984 heap: 91217920 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbb7d000/0x0/0x4ffc00000, data 0x22c005/0x30e000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,4])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:57.253237+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 91176960 unmapped: 1089536 heap: 92266496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:58.253450+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 90611712 unmapped: 1654784 heap: 92266496 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:16:59.253647+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 90685440 unmapped: 2629632 heap: 93315072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbb4f000/0x0/0x4ffc00000, data 0x25abaa/0x33c000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107895 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:00.253829+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 2424832 heap: 93315072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbb4f000/0x0/0x4ffc00000, data 0x25abaa/0x33c000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:01.253943+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 2416640 heap: 93315072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.056671143s of 11.827801704s, submitted: 72
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:02.254190+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 90742784 unmapped: 2572288 heap: 93315072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:03.254391+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 2768896 heap: 93315072 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:04.254596+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 2048000 heap: 94363648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118555 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:05.254754+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 92332032 unmapped: 2031616 heap: 94363648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbad5000/0x0/0x4ffc00000, data 0x2d2b2d/0x3b5000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:06.254881+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 92708864 unmapped: 1654784 heap: 94363648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:07.255020+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 92184576 unmapped: 2179072 heap: 94363648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:08.255253+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 92184576 unmapped: 2179072 heap: 94363648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbac4000/0x0/0x4ffc00000, data 0x2e4ea0/0x3c8000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:09.255414+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 1974272 heap: 94363648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119783 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:10.255665+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 1753088 heap: 94363648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fba6c000/0x0/0x4ffc00000, data 0x33be94/0x41f000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:11.255807+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 1736704 heap: 94363648 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.436212540s of 10.000144005s, submitted: 65
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:12.255943+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 93650944 unmapped: 1761280 heap: 95412224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:13.256104+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fba1c000/0x0/0x4ffc00000, data 0x38cfe2/0x46f000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 1818624 heap: 95412224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:14.256241+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 93519872 unmapped: 1892352 heap: 95412224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fba1a000/0x0/0x4ffc00000, data 0x38fb3e/0x471000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:15.256459+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124993 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 93937664 unmapped: 1474560 heap: 95412224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:16.256612+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 1613824 heap: 95412224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb9ee000/0x0/0x4ffc00000, data 0x3bc512/0x49e000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:17.256755+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 93724672 unmapped: 1687552 heap: 95412224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:18.256920+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 93364224 unmapped: 2048000 heap: 95412224 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:19.257088+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 1916928 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:20.257246+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132719 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 1916928 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:21.257430+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94330880 unmapped: 2129920 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.803009033s of 10.000171661s, submitted: 90
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:22.257628+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94617600 unmapped: 1843200 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb924000/0x0/0x4ffc00000, data 0x4868c5/0x568000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:23.257795+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94527488 unmapped: 1933312 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:24.258008+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 1507328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:25.258167+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129921 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 1507328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:26.258301+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 1507328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:27.258461+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb8c9000/0x0/0x4ffc00000, data 0x4e2c8a/0x5c3000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 95043584 unmapped: 2465792 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:28.258652+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 95043584 unmapped: 2465792 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:29.258779+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 95051776 unmapped: 2457600 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:30.258916+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130513 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 2924544 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:31.259056+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 2924544 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.888851166s of 10.042918205s, submitted: 34
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:32.259220+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 2924544 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb8c8000/0x0/0x4ffc00000, data 0x4e2c8a/0x5c3000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:33.259382+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 2924544 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:34.259549+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 2924544 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:35.259794+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130081 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 2924544 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb8c8000/0x0/0x4ffc00000, data 0x4e2cbc/0x5c3000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:36.259973+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 2924544 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:37.260188+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 2924544 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:38.260408+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 2924544 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:39.260642+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 2924544 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:40.260854+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130081 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 2924544 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 ms_handle_reset con 0x557ef0d46c00 session 0x557ef14896c0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:41.261033+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94797824 unmapped: 2711552 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb8c9000/0x0/0x4ffc00000, data 0x4e2cbc/0x5c3000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:42.261315+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94797824 unmapped: 2711552 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 17
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:43.261636+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.388603210s of 11.564454079s, submitted: 141
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2629632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:44.261882+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2629632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:45.262129+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130081 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2629632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb8c8000/0x0/0x4ffc00000, data 0x4e2c8a/0x5c3000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:46.262297+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2629632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:47.262462+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2629632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:48.262670+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb8c7000/0x0/0x4ffc00000, data 0x4e2d54/0x5c4000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2629632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb8c7000/0x0/0x4ffc00000, data 0x4e2d54/0x5c4000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:49.262788+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2629632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:50.262932+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131757 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2629632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:51.263139+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2629632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:52.263340+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb8c7000/0x0/0x4ffc00000, data 0x4e2d25/0x5c4000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2629632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:53.263486+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2629632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.771835327s of 10.819510460s, submitted: 18
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:54.263640+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94887936 unmapped: 2621440 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:55.263795+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134439 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94887936 unmapped: 2621440 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:56.263939+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94887936 unmapped: 2621440 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb8c5000/0x0/0x4ffc00000, data 0x4e2e1b/0x5c5000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:57.264109+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2613248 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb8c4000/0x0/0x4ffc00000, data 0x4e2eb6/0x5c6000, compress 0x0/0x0/0x0, omap 0x13b22, meta 0x3d5c4de), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:58.264234+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2613248 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:17:59.264397+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2613248 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:00.264589+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135667 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2613248 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:01.264768+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2613248 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:02.264961+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2613248 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb8bf000/0x0/0x4ffc00000, data 0x4e4ac1/0x5c9000, compress 0x0/0x0/0x0, omap 0x1b1b2, meta 0x3d54e4e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:03.265078+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb8bf000/0x0/0x4ffc00000, data 0x4e4ac1/0x5c9000, compress 0x0/0x0/0x0, omap 0x1b1b2, meta 0x3d54e4e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2613248 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:04.265201+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 2605056 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.452872276s of 10.671550751s, submitted: 54
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:05.265305+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 2605056 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:06.265440+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 2605056 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:07.265629+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 2596864 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb8c0000/0x0/0x4ffc00000, data 0x4e657b/0x5ca000, compress 0x0/0x0/0x0, omap 0x1b48e, meta 0x3d54b72), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:08.265787+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 2596864 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:09.265948+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:10.266085+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb8c2000/0x0/0x4ffc00000, data 0x4e64e6/0x5c9000, compress 0x0/0x0/0x0, omap 0x1b48e, meta 0x3d54b72), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141377 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:11.266228+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:12.266433+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:13.266594+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:14.266711+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:15.266868+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144151 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:16.267026+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb8bf000/0x0/0x4ffc00000, data 0x4e7f33/0x5cc000, compress 0x0/0x0/0x0, omap 0x1b735, meta 0x3d548cb), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.642515182s of 11.862696648s, submitted: 49
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:17.267151+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:18.267293+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:19.267466+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:20.267700+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb8bf000/0x0/0x4ffc00000, data 0x4e7f66/0x5cc000, compress 0x0/0x0/0x0, omap 0x1b735, meta 0x3d548cb), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144295 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:21.267855+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb8bf000/0x0/0x4ffc00000, data 0x4e7f33/0x5cc000, compress 0x0/0x0/0x0, omap 0x1b735, meta 0x3d548cb), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:22.268032+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:23.268217+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:24.268386+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 2596864 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:25.268717+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144295 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 2596864 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:26.269172+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 2596864 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb8bf000/0x0/0x4ffc00000, data 0x4e7f65/0x5cc000, compress 0x0/0x0/0x0, omap 0x1b735, meta 0x3d548cb), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:27.269390+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 2596864 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.100579262s of 11.133728981s, submitted: 14
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:28.270062+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 2596864 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:29.271794+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:30.272486+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144295 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:31.273106+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:32.273297+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb8bb000/0x0/0x4ffc00000, data 0x4e9b38/0x5cf000, compress 0x0/0x0/0x0, omap 0x1ba14, meta 0x3d545ec), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:33.274188+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:34.275002+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:35.275144+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146925 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:36.275749+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:37.275972+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:38.276115+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb8bb000/0x0/0x4ffc00000, data 0x4e9c00/0x5d0000, compress 0x0/0x0/0x0, omap 0x1ba14, meta 0x3d545ec), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:39.276377+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:40.276624+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb8bb000/0x0/0x4ffc00000, data 0x4e9c00/0x5d0000, compress 0x0/0x0/0x0, omap 0x1ba14, meta 0x3d545ec), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148761 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2588672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:41.277319+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.414926529s of 13.485723495s, submitted: 31
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2580480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:42.277535+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 2564096 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:43.277760+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 2564096 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:44.277900+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 2564096 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb8b3000/0x0/0x4ffc00000, data 0x4ed2e1/0x5d7000, compress 0x0/0x0/0x0, omap 0x1bf72, meta 0x3d5408e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:45.278093+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155123 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2555904 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:46.278280+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2555904 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:47.278435+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fb8b5000/0x0/0x4ffc00000, data 0x4ed219/0x5d6000, compress 0x0/0x0/0x0, omap 0x1bf72, meta 0x3d5408e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2555904 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:48.278609+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 1507328 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:49.278823+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 1507328 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:50.279028+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158473 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96002048 unmapped: 1507328 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:51.279268+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.187466621s of 10.291038513s, submitted: 62
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 1499136 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:52.279493+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 1499136 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb8ad000/0x0/0x4ffc00000, data 0x4f090b/0x5dd000, compress 0x0/0x0/0x0, omap 0x1c4fb, meta 0x3d53b05), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:53.279657+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 1499136 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:54.280052+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 1499136 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb8b0000/0x0/0x4ffc00000, data 0x4f0870/0x5dc000, compress 0x0/0x0/0x0, omap 0x1c4fb, meta 0x3d53b05), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:55.280241+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb8b0000/0x0/0x4ffc00000, data 0x4f0870/0x5dc000, compress 0x0/0x0/0x0, omap 0x1c4fb, meta 0x3d53b05), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160511 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 1490944 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:56.280431+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96018432 unmapped: 1490944 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 155 handle_osd_map epochs [155,156], i have 156, src has [1,156]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:57.280681+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb8ac000/0x0/0x4ffc00000, data 0x4f2445/0x5de000, compress 0x0/0x0/0x0, omap 0x1c7e2, meta 0x3d5381e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 1482752 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:58.280842+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 1482752 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:18:59.280979+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 1482752 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:00.281170+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163271 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 1482752 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:01.281360+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fb8ac000/0x0/0x4ffc00000, data 0x4f2445/0x5de000, compress 0x0/0x0/0x0, omap 0x1c7e2, meta 0x3d5381e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.435250282s of 10.148321152s, submitted: 35
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1466368 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:02.281651+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fb8ac000/0x0/0x4ffc00000, data 0x4f2445/0x5de000, compress 0x0/0x0/0x0, omap 0x1c7e2, meta 0x3d5381e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1466368 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:03.281796+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1466368 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:04.281965+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fb8aa000/0x0/0x4ffc00000, data 0x4f3f5f/0x5e2000, compress 0x0/0x0/0x0, omap 0x1ca84, meta 0x3d5357c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1466368 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:05.282061+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166443 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1466368 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:06.282265+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1466368 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:07.282395+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1466368 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:08.282681+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1466368 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:09.282868+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1466368 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:10.283042+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fb8aa000/0x0/0x4ffc00000, data 0x4f3ec4/0x5e1000, compress 0x0/0x0/0x0, omap 0x1ca84, meta 0x3d5357c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166443 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1466368 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:11.283261+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1466368 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fb8aa000/0x0/0x4ffc00000, data 0x4f3ec4/0x5e1000, compress 0x0/0x0/0x0, omap 0x1ca84, meta 0x3d5357c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:12.283490+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.955768585s of 10.971647263s, submitted: 14
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 1458176 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:13.283731+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 1458176 heap: 98557952 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:14.283970+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97124352 unmapped: 2482176 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:15.284129+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fb8a5000/0x0/0x4ffc00000, data 0x4f76fe/0x5e7000, compress 0x0/0x0/0x0, omap 0x1cf92, meta 0x3d5306e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171017 data_alloc: 218103808 data_used: 5985
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97124352 unmapped: 2482176 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:16.284373+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97124352 unmapped: 2482176 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:17.284511+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97124352 unmapped: 2482176 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:18.284694+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 159 handle_osd_map epochs [159,160], i have 160, src has [1,160]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97165312 unmapped: 2441216 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:19.284834+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fb8a0000/0x0/0x4ffc00000, data 0x4f91a9/0x5ea000, compress 0x0/0x0/0x0, omap 0x1d2fa, meta 0x3d52d06), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97165312 unmapped: 2441216 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:20.285070+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174495 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fb8a0000/0x0/0x4ffc00000, data 0x4f91a9/0x5ea000, compress 0x0/0x0/0x0, omap 0x1d2fa, meta 0x3d52d06), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97165312 unmapped: 2441216 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:21.285253+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97165312 unmapped: 2441216 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:22.285476+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.687158585s of 10.030435562s, submitted: 54
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 2433024 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb8a0000/0x0/0x4ffc00000, data 0x4f91a9/0x5ea000, compress 0x0/0x0/0x0, omap 0x1d2fa, meta 0x3d52d06), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:23.285629+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 2433024 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:24.285886+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 2433024 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:25.286133+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177269 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 2433024 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:26.286362+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 2433024 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:27.286635+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:28.286872+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:29.287024+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:30.287345+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177269 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:31.287603+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:32.287841+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:33.288188+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:34.288394+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:35.288814+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177269 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:36.289039+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:37.289351+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:38.289715+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:39.290015+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:40.290271+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177269 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:41.290448+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:42.290684+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2400256 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:43.290866+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:44.291013+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:45.291142+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177269 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:46.291297+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:47.291524+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:48.291709+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:49.291859+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:50.292130+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177269 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:51.292292+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:52.292454+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:53.292633+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:54.292849+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:55.293000+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177269 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:56.293163+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:57.293316+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:58.293487+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:19:59.293627+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:00.293790+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177269 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:01.293970+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2392064 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:02.294175+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:03.294293+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:04.294421+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:05.294603+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177269 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:06.294735+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:07.294883+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:08.295047+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:09.295233+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:10.295390+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177269 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:11.295539+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:12.295785+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:13.295980+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:14.296203+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:15.296414+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177269 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:16.297177+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:17.297357+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:18.297511+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 56.167118073s of 56.172733307s, submitted: 11
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:19.297638+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:20.297753+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176709 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:21.297884+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:22.298003+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89f000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:23.298109+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89f000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:24.298323+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:25.298533+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176709 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:26.298765+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:27.298934+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89f000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:28.299077+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:29.299289+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.314463615s of 10.451193810s, submitted: 7
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89f000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:30.299454+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176725 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 2383872 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:31.299623+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:32.299924+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:33.300087+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:34.300245+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89f000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:35.300633+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176725 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:36.300823+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:37.301013+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:38.301191+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89f000/0x0/0x4ffc00000, data 0x4fac48/0x5ed000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:39.301992+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:40.302457+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176725 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:41.302593+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:42.302848+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.354802132s of 13.536408424s, submitted: 1
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:43.303153+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:44.303460+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:45.303614+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178385 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:46.303990+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:47.304155+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:48.304411+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:49.304588+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2375680 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: handle_auth_request added challenge on 0x557ef0d47000
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:50.304838+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 18
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179933 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:51.305079+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89d000/0x0/0x4ffc00000, data 0x4fadf4/0x5ef000, compress 0x0/0x0/0x0, omap 0x1d609, meta 0x3d529f7), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:52.305394+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.026954651s of 10.036269188s, submitted: 4
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:53.305660+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 19
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:54.305981+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:55.306261+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180683 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:56.306495+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:57.306659+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:58.306853+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:20:59.307051+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:00.307182+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180683 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:01.307318+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:02.307507+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:03.307679+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:04.307834+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:05.308100+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180683 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:06.308291+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:07.308424+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:08.308658+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:09.308867+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:10.309113+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180683 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:11.309279+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:12.309482+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:13.309666+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:14.309983+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:15.310107+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180683 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:16.310321+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:17.310452+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:18.310597+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:19.310754+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:20.310935+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180683 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:21.311091+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:22.311241+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:23.352199+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:24.352309+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:25.352424+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180683 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:26.352596+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:27.352723+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:28.352841+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:29.352961+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:30.353122+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180683 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:31.353396+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:32.353797+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:33.353944+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:34.354078+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 2244608 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:35.354260+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:36.354436+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180683 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:37.354617+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:38.354782+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:39.354939+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:40.355194+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:41.355452+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180683 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:42.355637+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:43.355767+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 20
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:44.355899+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:45.356020+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:46.356198+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180683 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:47.356362+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:48.356509+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:49.356625+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2236416 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 21
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 57.039428711s of 57.044803619s, submitted: 3
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:50.356770+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:51.357103+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180667 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:52.357305+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:53.357466+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:54.357694+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:55.357882+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:56.358015+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180667 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:57.358204+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:58.358381+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:21:59.358499+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb89e000/0x0/0x4ffc00000, data 0x4face3/0x5ee000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:00.358638+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:01.358757+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180667 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:02.358923+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:03.359022+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:04.359083+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.553768158s of 14.558128357s, submitted: 2
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:05.359510+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb888000/0x0/0x4ffc00000, data 0x510520/0x604000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:06.359636+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184165 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 2228224 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:07.359770+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 2138112 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:08.359886+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 2138112 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:09.360049+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb863000/0x0/0x4ffc00000, data 0x5356fe/0x629000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 2138112 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:10.360188+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97468416 unmapped: 2138112 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:11.360334+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186667 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 2129920 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 ms_handle_reset con 0x557ef0d47000 session 0x557eef239500
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:12.360612+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 3031040 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:13.360778+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 2965504 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 22
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb851000/0x0/0x4ffc00000, data 0x5477c8/0x63b000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:14.360924+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 2965504 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:15.361079+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 2965504 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.170724869s of 11.313410759s, submitted: 151
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:16.361218+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189211 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96608256 unmapped: 2998272 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb838000/0x0/0x4ffc00000, data 0x560c59/0x654000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:17.361361+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 2842624 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:18.361490+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 2834432 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:19.361613+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb812000/0x0/0x4ffc00000, data 0x586976/0x67a000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97009664 unmapped: 2596864 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:20.361736+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 2555904 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb80e000/0x0/0x4ffc00000, data 0x589e3d/0x67e000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:21.361894+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191041 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97083392 unmapped: 2523136 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:22.362016+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97189888 unmapped: 2416640 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:23.362136+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb7da000/0x0/0x4ffc00000, data 0x5be532/0x6b2000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97189888 unmapped: 2416640 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:24.362290+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97296384 unmapped: 2310144 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:25.362455+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97460224 unmapped: 2146304 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.092599869s of 10.044796944s, submitted: 19
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:26.362682+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192511 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97460224 unmapped: 2146304 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb7a9000/0x0/0x4ffc00000, data 0x5ef1e4/0x6e3000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:27.362847+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97460224 unmapped: 2146304 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:28.362994+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 2023424 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:29.363172+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 2048000 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:30.363298+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fb788000/0x0/0x4ffc00000, data 0x610188/0x704000, compress 0x0/0x0/0x0, omap 0x1d762, meta 0x3d5289e), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 2048000 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:31.363458+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193775 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 2048000 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:32.363641+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 933888 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fb760000/0x0/0x4ffc00000, data 0x6362b8/0x72a000, compress 0x0/0x0/0x0, omap 0x1db1d, meta 0x3d524e3), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:33.363873+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 933888 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:34.364058+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98746368 unmapped: 860160 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:35.364190+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98746368 unmapped: 860160 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fb760000/0x0/0x4ffc00000, data 0x6362b8/0x72a000, compress 0x0/0x0/0x0, omap 0x1db1d, meta 0x3d524e3), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:36.364314+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197425 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98746368 unmapped: 860160 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.506530762s of 10.829941750s, submitted: 38
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fb745000/0x0/0x4ffc00000, data 0x653679/0x747000, compress 0x0/0x0/0x0, omap 0x1db1d, meta 0x3d524e3), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:37.364534+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 704512 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:38.364799+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:39.364922+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:40.365068+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:41.365222+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb740000/0x0/0x4ffc00000, data 0x6550f8/0x74a000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201511 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:42.365383+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:43.365659+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb740000/0x0/0x4ffc00000, data 0x6550f8/0x74a000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:44.365792+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:45.365936+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:46.366052+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201511 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb740000/0x0/0x4ffc00000, data 0x6550f8/0x74a000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:47.366196+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb740000/0x0/0x4ffc00000, data 0x6550f8/0x74a000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:48.366351+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:49.366506+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb740000/0x0/0x4ffc00000, data 0x6550f8/0x74a000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:50.366675+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:51.366799+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201511 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:52.366970+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:53.367119+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:54.367297+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb740000/0x0/0x4ffc00000, data 0x6550f8/0x74a000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:55.367480+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:56.367699+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201511 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:57.367820+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:58.368045+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb740000/0x0/0x4ffc00000, data 0x6550f8/0x74a000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:22:59.368209+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:00.368438+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:01.368639+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201511 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb740000/0x0/0x4ffc00000, data 0x6550f8/0x74a000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:02.368827+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:03.369004+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:04.369150+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb740000/0x0/0x4ffc00000, data 0x6550f8/0x74a000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:05.369312+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:06.369502+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201511 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:07.369661+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 655360 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.550893784s of 31.567113876s, submitted: 60
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:08.369786+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 696320 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb728000/0x0/0x4ffc00000, data 0x66ed15/0x764000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:09.369965+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 696320 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:10.370124+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 696320 heap: 99606528 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:11.370286+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201951 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 2129920 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:12.370461+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 2129920 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:13.370651+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 2129920 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb711000/0x0/0x4ffc00000, data 0x685c9a/0x77b000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:14.370822+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 2097152 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:15.371023+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 2097152 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:16.371209+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201967 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98934784 unmapped: 1720320 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:17.371349+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98934784 unmapped: 1720320 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:18.371526+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 99057664 unmapped: 1597440 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:19.371689+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 99328000 unmapped: 1327104 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb6b6000/0x0/0x4ffc00000, data 0x6e08ad/0x7d6000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.618362427s of 11.759558678s, submitted: 21
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:20.371871+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 2162688 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:21.372035+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204631 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 2129920 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb6b6000/0x0/0x4ffc00000, data 0x6e08ad/0x7d6000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:22.372182+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 2129920 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb693000/0x0/0x4ffc00000, data 0x7038b8/0x7f9000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:23.372359+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98566144 unmapped: 2088960 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:24.372832+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 2080768 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:25.372983+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 2080768 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb68f000/0x0/0x4ffc00000, data 0x7076ee/0x7fd000, compress 0x0/0x0/0x0, omap 0x1ddf1, meta 0x3d5220f), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:26.373197+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206375 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 1867776 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:27.373374+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98844672 unmapped: 1810432 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:28.373526+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 1826816 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:29.373701+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 1826816 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:30.373902+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb663000/0x0/0x4ffc00000, data 0x7307f5/0x827000, compress 0x0/0x0/0x0, omap 0x1e0e4, meta 0x3d51f1c), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 1826816 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:31.374039+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212169 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.374418259s of 11.674894333s, submitted: 32
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 1572864 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:32.374212+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 1572864 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:33.374355+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 1572864 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:34.374515+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 1966080 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:35.374682+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb652000/0x0/0x4ffc00000, data 0x7403c9/0x838000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 1966080 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:36.374881+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214515 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 1966080 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:37.375030+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:38.375170+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:39.375321+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb64c000/0x0/0x4ffc00000, data 0x745d30/0x83e000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:40.375497+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb64c000/0x0/0x4ffc00000, data 0x745d30/0x83e000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:41.375649+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214515 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:42.375818+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb64c000/0x0/0x4ffc00000, data 0x745d30/0x83e000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:43.376010+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb64c000/0x0/0x4ffc00000, data 0x745d30/0x83e000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:44.376157+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb64c000/0x0/0x4ffc00000, data 0x745d30/0x83e000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:45.376337+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb64c000/0x0/0x4ffc00000, data 0x745d30/0x83e000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:46.376605+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214515 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:47.376795+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:48.377045+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:49.377435+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb64c000/0x0/0x4ffc00000, data 0x745d30/0x83e000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:50.377674+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:51.377858+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214515 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:52.378061+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:53.378193+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:54.378452+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb64c000/0x0/0x4ffc00000, data 0x745d30/0x83e000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:55.378689+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:56.378868+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214515 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb64c000/0x0/0x4ffc00000, data 0x745d30/0x83e000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1941504 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:57.379101+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb64c000/0x0/0x4ffc00000, data 0x745d30/0x83e000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.405838013s of 25.665611267s, submitted: 13
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 2097152 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:58.379344+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 2097152 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:23:59.379531+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 1916928 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:00.379827+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 1916928 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:01.380000+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217623 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 1916928 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb61b000/0x0/0x4ffc00000, data 0x778749/0x871000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:02.380204+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 1916928 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:03.380377+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 630784 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:04.380534+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100089856 unmapped: 565248 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:05.380711+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100089856 unmapped: 565248 heap: 100655104 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:06.380869+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223693 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 835584 heap: 101703680 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:07.381040+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb5f7000/0x0/0x4ffc00000, data 0x79c6b8/0x895000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.346488953s of 10.627882004s, submitted: 28
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101089280 unmapped: 614400 heap: 101703680 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:08.381207+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101089280 unmapped: 614400 heap: 101703680 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:09.381380+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 606208 heap: 101703680 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:10.381610+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 1318912 heap: 101703680 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:11.381769+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fb576000/0x0/0x4ffc00000, data 0x81e28a/0x916000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225255 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 1318912 heap: 101703680 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:12.382006+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 165 handle_osd_map epochs [165,166], i have 166, src has [1,166]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 1351680 heap: 101703680 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fb56a000/0x0/0x4ffc00000, data 0x82a180/0x922000, compress 0x0/0x0/0x0, omap 0x1e378, meta 0x3d51c88), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:13.382327+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100442112 unmapped: 1261568 heap: 101703680 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:14.382472+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 1212416 heap: 101703680 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:15.382664+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 1212416 heap: 101703680 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fb555000/0x0/0x4ffc00000, data 0x83deab/0x937000, compress 0x0/0x0/0x0, omap 0x1e66e, meta 0x3d51992), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:16.382798+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230173 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 2400256 heap: 102752256 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:17.382968+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 2400256 heap: 102752256 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:18.383140+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 2400256 heap: 102752256 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:19.383302+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.120405197s of 11.340003014s, submitted: 39
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 2023424 heap: 102752256 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fb510000/0x0/0x4ffc00000, data 0x882963/0x97c000, compress 0x0/0x0/0x0, omap 0x1e66e, meta 0x3d51992), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:20.383464+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 2072576 heap: 102752256 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:21.383675+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235995 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 2072576 heap: 102752256 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:22.383863+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100761600 unmapped: 1990656 heap: 102752256 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:23.384113+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 2220032 heap: 102752256 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:24.384324+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fb4bd000/0x0/0x4ffc00000, data 0x8d4589/0x9cf000, compress 0x0/0x0/0x0, omap 0x1e8ff, meta 0x3d51701), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 2220032 heap: 102752256 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:25.384500+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fb4bd000/0x0/0x4ffc00000, data 0x8d4589/0x9cf000, compress 0x0/0x0/0x0, omap 0x1e8ff, meta 0x3d51701), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 2220032 heap: 102752256 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:26.384611+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238211 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fb4bd000/0x0/0x4ffc00000, data 0x8d4589/0x9cf000, compress 0x0/0x0/0x0, omap 0x1e8ff, meta 0x3d51701), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 1785856 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:27.384862+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 1785856 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:28.385018+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 1613824 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:29.385142+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 1613824 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:30.385293+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fb48e000/0x0/0x4ffc00000, data 0x900b64/0x9fc000, compress 0x0/0x0/0x0, omap 0x1ebf8, meta 0x3d51408), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 1613824 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:31.385421+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.150527954s of 12.367747307s, submitted: 56
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241787 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101326848 unmapped: 2473984 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fb48e000/0x0/0x4ffc00000, data 0x900b64/0x9fc000, compress 0x0/0x0/0x0, omap 0x1ebf8, meta 0x3d51408), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:32.385623+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101326848 unmapped: 2473984 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:33.385880+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101326848 unmapped: 2473984 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:34.386097+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101351424 unmapped: 2449408 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:35.386252+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101351424 unmapped: 2449408 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:36.386454+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245121 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 2465792 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:37.386879+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fb47b000/0x0/0x4ffc00000, data 0x911bf9/0xa0f000, compress 0x0/0x0/0x0, omap 0x1eecb, meta 0x3d51135), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 2416640 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:38.387072+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 2416640 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:39.387259+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 2416640 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:40.387419+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fb457000/0x0/0x4ffc00000, data 0x93471c/0xa33000, compress 0x0/0x0/0x0, omap 0x1eecb, meta 0x3d51135), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 2359296 heap: 103800832 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:41.387625+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.961743355s of 10.000481606s, submitted: 23
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249503 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 3407872 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:42.387836+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 3407872 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:43.388009+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 3407872 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:44.388167+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101474304 unmapped: 3375104 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:45.388329+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 3506176 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:46.388581+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fb440000/0x0/0x4ffc00000, data 0x94e3e9/0xa4c000, compress 0x0/0x0/0x0, omap 0x1eecb, meta 0x3d51135), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255175 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 3031040 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:47.388746+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 3031040 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:48.388870+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 3031040 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:49.388999+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 2523136 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:50.389168+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fb3c7000/0x0/0x4ffc00000, data 0x9c80d5/0xac5000, compress 0x0/0x0/0x0, omap 0x1eecb, meta 0x3d51135), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fb3c3000/0x0/0x4ffc00000, data 0x9cc4b8/0xac9000, compress 0x0/0x0/0x0, omap 0x1eecb, meta 0x3d51135), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 2514944 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:51.389320+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.594641685s of 10.000421524s, submitted: 20
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253803 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fb3c3000/0x0/0x4ffc00000, data 0x9cc4b8/0xac9000, compress 0x0/0x0/0x0, omap 0x1eecb, meta 0x3d51135), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102359040 unmapped: 2490368 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:52.389548+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 3178496 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:53.389759+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 3178496 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:54.389966+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fb3b8000/0x0/0x4ffc00000, data 0x9d3a24/0xad2000, compress 0x0/0x0/0x0, omap 0x1f1c7, meta 0x3d50e39), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 3178496 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:55.390100+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fb3b8000/0x0/0x4ffc00000, data 0x9d3a24/0xad2000, compress 0x0/0x0/0x0, omap 0x1f1c7, meta 0x3d50e39), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101670912 unmapped: 3178496 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:56.390278+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258609 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101842944 unmapped: 3006464 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:57.390436+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9884 writes, 35K keys, 9884 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9884 writes, 2661 syncs, 3.71 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2879 writes, 8594 keys, 2879 commit groups, 1.0 writes per commit group, ingest: 11.19 MB, 0.02 MB/s
                                           Interval WAL: 2879 writes, 1188 syncs, 2.42 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101842944 unmapped: 3006464 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:58.390583+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102031360 unmapped: 2818048 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:24:59.390712+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fb38d000/0x0/0x4ffc00000, data 0xa008e3/0xaff000, compress 0x0/0x0/0x0, omap 0x1f1c7, meta 0x3d50e39), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:00.390808+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fb38d000/0x0/0x4ffc00000, data 0xa008e3/0xaff000, compress 0x0/0x0/0x0, omap 0x1f1c7, meta 0x3d50e39), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:01.390903+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258609 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:02.391120+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:03.391266+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:04.391363+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.941542625s of 12.855049133s, submitted: 28
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:05.391582+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:06.391757+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260151 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:07.391959+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:08.392288+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:09.392425+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:10.392530+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:11.392652+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260151 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:12.392845+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:13.392979+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:14.393107+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _renew_subs
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:15.393309+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:16.393438+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260151 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:17.393605+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:18.393752+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:19.393866+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:20.393976+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:21.394198+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260151 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:22.394365+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:23.394509+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:24.394665+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:25.395437+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:26.395543+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260151 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:27.395719+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:28.395891+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:29.396023+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:30.396153+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:31.396281+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260151 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:32.396449+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:33.396683+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:34.396813+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:35.396950+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:36.397072+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260151 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:37.397480+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:38.397647+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:39.397772+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:40.397940+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:41.398066+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260151 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:42.398224+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:43.398380+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:44.398525+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:45.398703+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:46.398897+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260151 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:47.399107+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:48.399318+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:49.399539+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:50.399733+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:51.399923+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260151 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:52.400100+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:53.400245+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:54.400527+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:55.400767+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:56.400932+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260151 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:57.401166+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:58.401408+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:25:59.401630+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 2801664 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb388000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 55.510047913s of 55.517059326s, submitted: 10
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:00.401818+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 102080512 unmapped: 2768896 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:01.401962+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 4202496 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259431 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:02.402185+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 4169728 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:03.402368+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 4161536 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb38a000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:04.402500+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 4161536 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:05.402650+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 4161536 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:06.402772+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 4161536 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259431 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:07.402923+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 4161536 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:08.403040+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 4161536 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:09.403160+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 4161536 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb38a000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:10.403273+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 4161536 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.632461548s of 10.884867668s, submitted: 106
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:11.403368+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 4153344 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb38a000/0x0/0x4ffc00000, data 0xa02362/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259591 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:12.403515+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 ms_handle_reset con 0x557eeffcdc00 session 0x557ef0cfafc0
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 3964928 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb38a000/0x0/0x4ffc00000, data 0xa0247c/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Got map version 23
Jan 21 14:26:54 compute-0 ceph-osd[85740]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2882926037,v1:192.168.122.100:6801/2882926037]
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:13.403657+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 3964928 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:14.403786+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 3964928 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:15.403935+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 3964928 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb38a000/0x0/0x4ffc00000, data 0xa02575/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:16.404120+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 3964928 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259623 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:17.404270+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 3964928 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:18.404410+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 3964928 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:19.404534+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 3964928 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:20.404721+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 3964928 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:21.404900+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: do_command 'config diff' '{prefix=config diff}'
Jan 21 14:26:54 compute-0 ceph-osd[85740]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 21 14:26:54 compute-0 ceph-osd[85740]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fb38a000/0x0/0x4ffc00000, data 0xa02575/0xb02000, compress 0x0/0x0/0x0, omap 0x1f454, meta 0x3d50bac), peers [1,2] op hist [])
Jan 21 14:26:54 compute-0 ceph-osd[85740]: do_command 'config show' '{prefix=config show}'
Jan 21 14:26:54 compute-0 ceph-osd[85740]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101195776 unmapped: 3653632 heap: 104849408 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: do_command 'counter dump' '{prefix=counter dump}'
Jan 21 14:26:54 compute-0 ceph-osd[85740]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 21 14:26:54 compute-0 ceph-osd[85740]: do_command 'counter schema' '{prefix=counter schema}'
Jan 21 14:26:54 compute-0 ceph-osd[85740]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 21 14:26:54 compute-0 ceph-osd[85740]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 21 14:26:54 compute-0 ceph-osd[85740]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259623 data_alloc: 218103808 data_used: 6635
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:22.405095+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 4595712 heap: 105897984 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: tick
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_tickets
Jan 21 14:26:54 compute-0 ceph-osd[85740]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-21T14:26:23.405272+0000)
Jan 21 14:26:54 compute-0 ceph-osd[85740]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 4292608 heap: 105897984 old mem: 2845415832 new mem: 2845415832
Jan 21 14:26:54 compute-0 ceph-osd[85740]: do_command 'log dump' '{prefix=log dump}'
Jan 21 14:26:54 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 21 14:26:54 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 14:26:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 21 14:26:54 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885487105' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 21 14:26:54 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 21 14:26:54 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005355596' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 21 14:26:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14678 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14677 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14680 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:55 compute-0 nova_compute[239261]: 2026-01-21 14:26:55.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:26:55 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14682 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14684 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} v 0)
Jan 21 14:26:56 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 14:26:56 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/433541666' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 21 14:26:56 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1698848496' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 21 14:26:56 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3510320955' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 21 14:26:56 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3003765331' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 21 14:26:56 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14688 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} v 0)
Jan 21 14:26:56 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 14:26:56 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:56 compute-0 nova_compute[239261]: 2026-01-21 14:26:56.718 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:26:56 compute-0 nova_compute[239261]: 2026-01-21 14:26:56.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:26:56 compute-0 nova_compute[239261]: 2026-01-21 14:26:56.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 21 14:26:56 compute-0 nova_compute[239261]: 2026-01-21 14:26:56.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 21 14:26:56 compute-0 sudo[260227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:26:56 compute-0 sudo[260227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:26:56 compute-0 sudo[260227]: pam_unix(sudo:session): session closed for user root
Jan 21 14:26:56 compute-0 sudo[260282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 21 14:26:56 compute-0 sudo[260282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:26:56 compute-0 podman[260271]: 2026-01-21 14:26:56.902352567 +0000 UTC m=+0.090015557 container health_status 9cf15096c7daaca7e515449cc5ef22b9d7848cf51a7cd2219d568ed78a3b0ad2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 21 14:26:56 compute-0 podman[260270]: 2026-01-21 14:26:56.926852212 +0000 UTC m=+0.113359354 container health_status 65bb60c772116d0a56dfb466b5abb2441bc8cf17d2a580deeaa2ebbd1f4df488 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2cb4ddad64dc562b36b5eeb94c7ba654f2f471486c8ae39295a82365ae0eefbe-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b-46a313008caa07e8aa567d7f866145573589094bad78755ba06ee8a5dd6ae40b'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:26:56 compute-0 nova_compute[239261]: 2026-01-21 14:26:56.930 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 21 14:26:56 compute-0 nova_compute[239261]: 2026-01-21 14:26:56.931 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:26:56 compute-0 nova_compute[239261]: 2026-01-21 14:26:56.932 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:26:56 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 21 14:26:56 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/262221715' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 21 14:26:57 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14692 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:57 compute-0 systemd[1]: Starting Hostname Service...
Jan 21 14:26:57 compute-0 rsyslogd[1002]: imjournal: 16323 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 21 14:26:57 compute-0 sudo[260282]: pam_unix(sudo:session): session closed for user root
Jan 21 14:26:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:26:57 compute-0 systemd[1]: Started Hostname Service.
Jan 21 14:26:57 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:26:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:26:57 compute-0 ceph-mon[75031]: pgmap v1428: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 14:26:57 compute-0 ceph-mon[75031]: pgmap v1429: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3885487105' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/4005355596' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='client.14678 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='client.14677 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='client.14680 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='client.14682 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='client.14684 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='client.14688 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.xeytxr", "name": "rgw_frontends"} : dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: pgmap v1430: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/262221715' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: from='client.14692 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:57 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:26:57 compute-0 sudo[260433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:26:57 compute-0 sudo[260433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:26:57 compute-0 sudo[260433]: pam_unix(sudo:session): session closed for user root
Jan 21 14:26:57 compute-0 sudo[260467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 21 14:26:57 compute-0 sudo[260467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:26:57 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 21 14:26:57 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470918179' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 21 14:26:57 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14696 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:57 compute-0 nova_compute[239261]: 2026-01-21 14:26:57.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:26:57 compute-0 nova_compute[239261]: 2026-01-21 14:26:57.725 239265 DEBUG nova.compute.manager [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 21 14:26:57 compute-0 nova_compute[239261]: 2026-01-21 14:26:57.725 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:26:57 compute-0 nova_compute[239261]: 2026-01-21 14:26:57.765 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:26:57 compute-0 nova_compute[239261]: 2026-01-21 14:26:57.765 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:26:57 compute-0 nova_compute[239261]: 2026-01-21 14:26:57.766 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:26:57 compute-0 nova_compute[239261]: 2026-01-21 14:26:57.766 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 21 14:26:57 compute-0 nova_compute[239261]: 2026-01-21 14:26:57.766 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:26:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/915015450' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14700 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:58 compute-0 sudo[260467]: pam_unix(sudo:session): session closed for user root
Jan 21 14:26:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:26:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:26:58 compute-0 sudo[260652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:26:58 compute-0 sudo[260652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:26:58 compute-0 sudo[260652]: pam_unix(sudo:session): session closed for user root
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3470918179' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='client.14696 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/915015450' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='client.14700 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/851145430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:26:58 compute-0 nova_compute[239261]: 2026-01-21 14:26:58.389 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:26:58 compute-0 sudo[260700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 21 14:26:58 compute-0 sudo[260700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:26:58 compute-0 nova_compute[239261]: 2026-01-21 14:26:58.534 239265 WARNING nova.virt.libvirt.driver [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 21 14:26:58 compute-0 nova_compute[239261]: 2026-01-21 14:26:58.535 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4724MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 21 14:26:58 compute-0 nova_compute[239261]: 2026-01-21 14:26:58.536 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 21 14:26:58 compute-0 nova_compute[239261]: 2026-01-21 14:26:58.536 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 21 14:26:58 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/20496570' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 21 14:26:58 compute-0 nova_compute[239261]: 2026-01-21 14:26:58.657 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 21 14:26:58 compute-0 nova_compute[239261]: 2026-01-21 14:26:58.657 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 21 14:26:58 compute-0 nova_compute[239261]: 2026-01-21 14:26:58.684 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 21 14:26:58 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 21 14:26:58 compute-0 podman[260772]: 2026-01-21 14:26:58.726823432 +0000 UTC m=+0.079919823 container create bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 21 14:26:58 compute-0 podman[260772]: 2026-01-21 14:26:58.675966656 +0000 UTC m=+0.029063067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:26:58 compute-0 systemd[1]: Started libpod-conmon-bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088.scope.
Jan 21 14:26:58 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 21 14:26:58 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 21 14:26:59 compute-0 podman[260772]: 2026-01-21 14:26:59.023438758 +0000 UTC m=+0.376535219 container init bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 21 14:26:59 compute-0 podman[260772]: 2026-01-21 14:26:59.032505338 +0000 UTC m=+0.385601739 container start bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 21 14:26:59 compute-0 great_banzai[260861]: 167 167
Jan 21 14:26:59 compute-0 systemd[1]: libpod-bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088.scope: Deactivated successfully.
Jan 21 14:26:59 compute-0 podman[260772]: 2026-01-21 14:26:59.040074912 +0000 UTC m=+0.393171303 container attach bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 21 14:26:59 compute-0 podman[260772]: 2026-01-21 14:26:59.040881611 +0000 UTC m=+0.393978002 container died bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-961fe6d7b33d6a03a2f75484dd98d63dbfc7044dc7c645da7f1a232e13ebc0f8-merged.mount: Deactivated successfully.
Jan 21 14:26:59 compute-0 podman[260772]: 2026-01-21 14:26:59.109225802 +0000 UTC m=+0.462322193 container remove bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_banzai, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:26:59 compute-0 systemd[1]: libpod-conmon-bb19a5eec77aebd9e4f0cde87f9c303777575c9ef1a82f6cf703ef186354e088.scope: Deactivated successfully.
Jan 21 14:26:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 21 14:26:59 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110672913' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:26:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 21 14:26:59 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611037400' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 21 14:26:59 compute-0 nova_compute[239261]: 2026-01-21 14:26:59.287 239265 DEBUG oslo_concurrency.processutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 21 14:26:59 compute-0 nova_compute[239261]: 2026-01-21 14:26:59.293 239265 DEBUG nova.compute.provider_tree [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed in ProviderTree for provider: 172aa181-ce4f-4953-808e-b8a26e60249f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 21 14:26:59 compute-0 podman[260936]: 2026-01-21 14:26:59.305041149 +0000 UTC m=+0.063870743 container create 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:26:59 compute-0 nova_compute[239261]: 2026-01-21 14:26:59.313 239265 DEBUG nova.scheduler.client.report [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Inventory has not changed for provider 172aa181-ce4f-4953-808e-b8a26e60249f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 21 14:26:59 compute-0 nova_compute[239261]: 2026-01-21 14:26:59.314 239265 DEBUG nova.compute.resource_tracker [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 21 14:26:59 compute-0 nova_compute[239261]: 2026-01-21 14:26:59.314 239265 DEBUG oslo_concurrency.lockutils [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 21 14:26:59 compute-0 systemd[1]: Started libpod-conmon-5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6.scope.
Jan 21 14:26:59 compute-0 podman[260936]: 2026-01-21 14:26:59.27791536 +0000 UTC m=+0.036744984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:26:59 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/851145430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:26:59 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:26:59 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/20496570' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 21 14:26:59 compute-0 ceph-mon[75031]: pgmap v1431: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:26:59 compute-0 ceph-mon[75031]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 21 14:26:59 compute-0 ceph-mon[75031]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 21 14:26:59 compute-0 ceph-mon[75031]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 21 14:26:59 compute-0 ceph-mon[75031]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 21 14:26:59 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/4110672913' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 21 14:26:59 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/1611037400' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 21 14:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 21 14:26:59 compute-0 podman[260936]: 2026-01-21 14:26:59.406234767 +0000 UTC m=+0.165064381 container init 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:26:59 compute-0 podman[260936]: 2026-01-21 14:26:59.413198546 +0000 UTC m=+0.172028140 container start 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 21 14:26:59 compute-0 podman[260936]: 2026-01-21 14:26:59.419373327 +0000 UTC m=+0.178202931 container attach 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:26:59 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14718 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:26:59 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:26:59 compute-0 stupefied_nobel[261015]: --> passed data devices: 0 physical, 3 LVM
Jan 21 14:26:59 compute-0 stupefied_nobel[261015]: --> All data devices are unavailable
Jan 21 14:26:59 compute-0 systemd[1]: libpod-5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6.scope: Deactivated successfully.
Jan 21 14:26:59 compute-0 podman[260936]: 2026-01-21 14:26:59.98466971 +0000 UTC m=+0.743499304 container died 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 21 14:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbbe7c5b069dec1409fe47f8a13a21608ea99099e64d09e2ab5d45138764f622-merged.mount: Deactivated successfully.
Jan 21 14:27:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 21 14:27:00 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/289026277' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 21 14:27:00 compute-0 podman[260936]: 2026-01-21 14:27:00.312857013 +0000 UTC m=+1.071686607 container remove 5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:27:00 compute-0 nova_compute[239261]: 2026-01-21 14:27:00.314 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:27:00 compute-0 sudo[260700]: pam_unix(sudo:session): session closed for user root
Jan 21 14:27:00 compute-0 systemd[1]: libpod-conmon-5011e75a3d64eb47868b2cacb93535bda94934d2ee7e4ac7f0f65e74a24f60f6.scope: Deactivated successfully.
Jan 21 14:27:00 compute-0 ceph-mon[75031]: from='client.14718 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:27:00 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/289026277' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 21 14:27:00 compute-0 sudo[261116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:27:00 compute-0 sudo[261116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:27:00 compute-0 sudo[261116]: pam_unix(sudo:session): session closed for user root
Jan 21 14:27:00 compute-0 sudo[261160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- lvm list --format json
Jan 21 14:27:00 compute-0 sudo[261160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:27:00 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:27:00 compute-0 podman[261210]: 2026-01-21 14:27:00.784613214 +0000 UTC m=+0.045743352 container create 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 21 14:27:00 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 21 14:27:00 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3314800499' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 21 14:27:00 compute-0 podman[261210]: 2026-01-21 14:27:00.764096846 +0000 UTC m=+0.025227014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:27:00 compute-0 systemd[1]: Started libpod-conmon-72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247.scope.
Jan 21 14:27:00 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:27:00 compute-0 podman[261210]: 2026-01-21 14:27:00.930544229 +0000 UTC m=+0.191674377 container init 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:27:00 compute-0 podman[261210]: 2026-01-21 14:27:00.939893717 +0000 UTC m=+0.201023855 container start 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 21 14:27:00 compute-0 great_gauss[261233]: 167 167
Jan 21 14:27:00 compute-0 systemd[1]: libpod-72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247.scope: Deactivated successfully.
Jan 21 14:27:00 compute-0 podman[261210]: 2026-01-21 14:27:00.968359818 +0000 UTC m=+0.229489976 container attach 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 21 14:27:00 compute-0 podman[261210]: 2026-01-21 14:27:00.968711476 +0000 UTC m=+0.229841614 container died 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:27:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-496c8da67dd0b971724f0d80c1ec0d5ba3c945ba1c4ecb3c23e0193508af8ab2-merged.mount: Deactivated successfully.
Jan 21 14:27:01 compute-0 podman[261210]: 2026-01-21 14:27:01.043280868 +0000 UTC m=+0.304411006 container remove 72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:27:01 compute-0 systemd[1]: libpod-conmon-72345b383447d527f56d4ffa3ef708aa316d958c52022fc5ae3fd405ae035247.scope: Deactivated successfully.
Jan 21 14:27:01 compute-0 podman[261284]: 2026-01-21 14:27:01.239299 +0000 UTC m=+0.067678335 container create 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 21 14:27:01 compute-0 systemd[1]: Started libpod-conmon-35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059.scope.
Jan 21 14:27:01 compute-0 podman[261284]: 2026-01-21 14:27:01.194156723 +0000 UTC m=+0.022536078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:27:01 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefb0287876026d9a1a9cf3529597dafbfcd8c2dc6393cd08c418283a8fa8caa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefb0287876026d9a1a9cf3529597dafbfcd8c2dc6393cd08c418283a8fa8caa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefb0287876026d9a1a9cf3529597dafbfcd8c2dc6393cd08c418283a8fa8caa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefb0287876026d9a1a9cf3529597dafbfcd8c2dc6393cd08c418283a8fa8caa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:27:01 compute-0 podman[261284]: 2026-01-21 14:27:01.343766158 +0000 UTC m=+0.172145503 container init 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 21 14:27:01 compute-0 podman[261284]: 2026-01-21 14:27:01.356355004 +0000 UTC m=+0.184734339 container start 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:27:01 compute-0 podman[261284]: 2026-01-21 14:27:01.368988571 +0000 UTC m=+0.197367916 container attach 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 21 14:27:01 compute-0 ceph-mon[75031]: pgmap v1432: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:27:01 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3314800499' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 21 14:27:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 21 14:27:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/536238962' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 21 14:27:01 compute-0 stoic_beaver[261311]: {
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:     "0": [
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:         {
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "devices": [
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "/dev/loop3"
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             ],
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_name": "ceph_lv0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_size": "21470642176",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=bb69e93d-312d-404f-89ad-65c71069da0f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "name": "ceph_lv0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "tags": {
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.block_uuid": "38lI3m-8jEm-mhLS-314J-d4LG-geox-ZIkjgA",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.cluster_name": "ceph",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.crush_device_class": "",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.encrypted": "0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.objectstore": "bluestore",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.osd_fsid": "bb69e93d-312d-404f-89ad-65c71069da0f",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.osd_id": "0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.type": "block",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.vdo": "0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.with_tpm": "0"
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             },
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "type": "block",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "vg_name": "ceph_vg0"
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:         }
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:     ],
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:     "1": [
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:         {
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "devices": [
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "/dev/loop4"
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             ],
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_name": "ceph_lv1",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_size": "21470642176",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e72716bc-fd8c-40ef-ada4-83584d595d05,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "name": "ceph_lv1",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "tags": {
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.block_uuid": "0tvxDM-4iyh-rTAO-CKJ3-2PEY-vMf3-s1NWpw",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.cluster_name": "ceph",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.crush_device_class": "",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.encrypted": "0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.objectstore": "bluestore",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.osd_fsid": "e72716bc-fd8c-40ef-ada4-83584d595d05",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.osd_id": "1",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.type": "block",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.vdo": "0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.with_tpm": "0"
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             },
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "type": "block",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "vg_name": "ceph_vg1"
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:         }
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:     ],
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:     "2": [
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:         {
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "devices": [
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "/dev/loop5"
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             ],
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_name": "ceph_lv2",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_size": "21470642176",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f0e9cad-f0a3-5869-9cc3-8d84d071866a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8d905f10-e78d-4894-96b3-7b33a725e1b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "lv_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "name": "ceph_lv2",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "tags": {
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.block_uuid": "7HV12y-YI80-luxn-QX5m-JjHn-IjMM-70rllU",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.cephx_lockbox_secret": "",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.cluster_fsid": "2f0e9cad-f0a3-5869-9cc3-8d84d071866a",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.cluster_name": "ceph",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.crush_device_class": "",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.encrypted": "0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.objectstore": "bluestore",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.osd_fsid": "8d905f10-e78d-4894-96b3-7b33a725e1b7",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.osd_id": "2",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.type": "block",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.vdo": "0",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:                 "ceph.with_tpm": "0"
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             },
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "type": "block",
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:             "vg_name": "ceph_vg2"
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:         }
Jan 21 14:27:01 compute-0 stoic_beaver[261311]:     ]
Jan 21 14:27:01 compute-0 stoic_beaver[261311]: }
Jan 21 14:27:01 compute-0 systemd[1]: libpod-35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059.scope: Deactivated successfully.
Jan 21 14:27:01 compute-0 nova_compute[239261]: 2026-01-21 14:27:01.724 239265 DEBUG oslo_service.periodic_task [None req-ac982550-2320-466e-8873-bf8cd1f863a1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 21 14:27:01 compute-0 podman[261360]: 2026-01-21 14:27:01.733472095 +0000 UTC m=+0.043942598 container died 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 21 14:27:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-cefb0287876026d9a1a9cf3529597dafbfcd8c2dc6393cd08c418283a8fa8caa-merged.mount: Deactivated successfully.
Jan 21 14:27:01 compute-0 podman[261360]: 2026-01-21 14:27:01.80896755 +0000 UTC m=+0.119438083 container remove 35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_beaver, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:27:01 compute-0 systemd[1]: libpod-conmon-35b4e3263fdde4e849e53464696cab8cf40c470e8e94a05973761543598e2059.scope: Deactivated successfully.
Jan 21 14:27:01 compute-0 sudo[261160]: pam_unix(sudo:session): session closed for user root
Jan 21 14:27:01 compute-0 sudo[261380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 21 14:27:01 compute-0 sudo[261380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:27:01 compute-0 sudo[261380]: pam_unix(sudo:session): session closed for user root
Jan 21 14:27:01 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 21 14:27:01 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3784836232' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 21 14:27:01 compute-0 sudo[261410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2f0e9cad-f0a3-5869-9cc3-8d84d071866a/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2f0e9cad-f0a3-5869-9cc3-8d84d071866a -- raw list --format json
Jan 21 14:27:01 compute-0 sudo[261410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:27:02 compute-0 podman[261486]: 2026-01-21 14:27:02.255675572 +0000 UTC m=+0.052062276 container create ddc936b8d73bb13983127141d95bf4637488624c1194602d99fd3b5720b39370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 21 14:27:02 compute-0 systemd[1]: Started libpod-conmon-ddc936b8d73bb13983127141d95bf4637488624c1194602d99fd3b5720b39370.scope.
Jan 21 14:27:02 compute-0 podman[261486]: 2026-01-21 14:27:02.228757448 +0000 UTC m=+0.025144102 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:27:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:27:02 compute-0 podman[261486]: 2026-01-21 14:27:02.368507444 +0000 UTC m=+0.164894128 container init ddc936b8d73bb13983127141d95bf4637488624c1194602d99fd3b5720b39370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 21 14:27:02 compute-0 podman[261486]: 2026-01-21 14:27:02.378058446 +0000 UTC m=+0.174445110 container start ddc936b8d73bb13983127141d95bf4637488624c1194602d99fd3b5720b39370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:27:02 compute-0 podman[261486]: 2026-01-21 14:27:02.382572975 +0000 UTC m=+0.178959679 container attach ddc936b8d73bb13983127141d95bf4637488624c1194602d99fd3b5720b39370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:27:02 compute-0 xenodochial_wing[261505]: 167 167
Jan 21 14:27:02 compute-0 systemd[1]: libpod-ddc936b8d73bb13983127141d95bf4637488624c1194602d99fd3b5720b39370.scope: Deactivated successfully.
Jan 21 14:27:02 compute-0 podman[261486]: 2026-01-21 14:27:02.384590504 +0000 UTC m=+0.180977128 container died ddc936b8d73bb13983127141d95bf4637488624c1194602d99fd3b5720b39370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 21 14:27:02 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/536238962' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 21 14:27:02 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/3784836232' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 21 14:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-627e7615f79b9ed44a2abe302fd646b597f8f03b9e5008988016f29a369f6211-merged.mount: Deactivated successfully.
Jan 21 14:27:02 compute-0 podman[261486]: 2026-01-21 14:27:02.449174974 +0000 UTC m=+0.245561598 container remove ddc936b8d73bb13983127141d95bf4637488624c1194602d99fd3b5720b39370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:27:02 compute-0 systemd[1]: libpod-conmon-ddc936b8d73bb13983127141d95bf4637488624c1194602d99fd3b5720b39370.scope: Deactivated successfully.
Jan 21 14:27:02 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14728 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:27:02 compute-0 podman[261541]: 2026-01-21 14:27:02.612218824 +0000 UTC m=+0.048027977 container create 68e5ec6902a5297502605e7d094586f69ddfb8632db3fccb450273efa0a72300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:27:02 compute-0 systemd[1]: Started libpod-conmon-68e5ec6902a5297502605e7d094586f69ddfb8632db3fccb450273efa0a72300.scope.
Jan 21 14:27:02 compute-0 systemd[1]: Started libcrun container.
Jan 21 14:27:02 compute-0 podman[261541]: 2026-01-21 14:27:02.592127496 +0000 UTC m=+0.027936669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 21 14:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da48322a14a78077b46a47f46876c839e6947b75ea1c6c14e9916b043eda1100/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 21 14:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da48322a14a78077b46a47f46876c839e6947b75ea1c6c14e9916b043eda1100/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 21 14:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da48322a14a78077b46a47f46876c839e6947b75ea1c6c14e9916b043eda1100/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 21 14:27:02 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da48322a14a78077b46a47f46876c839e6947b75ea1c6c14e9916b043eda1100/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 21 14:27:02 compute-0 podman[261541]: 2026-01-21 14:27:02.711477586 +0000 UTC m=+0.147286749 container init 68e5ec6902a5297502605e7d094586f69ddfb8632db3fccb450273efa0a72300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:27:02 compute-0 podman[261541]: 2026-01-21 14:27:02.718676531 +0000 UTC m=+0.154485714 container start 68e5ec6902a5297502605e7d094586f69ddfb8632db3fccb450273efa0a72300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lumiere, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 21 14:27:02 compute-0 podman[261541]: 2026-01-21 14:27:02.734254349 +0000 UTC m=+0.170063592 container attach 68e5ec6902a5297502605e7d094586f69ddfb8632db3fccb450273efa0a72300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 21 14:27:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 21 14:27:03 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492460273' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 21 14:27:03 compute-0 lvm[261702]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:27:03 compute-0 lvm[261702]: VG ceph_vg0 finished
Jan 21 14:27:03 compute-0 lvm[261703]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 21 14:27:03 compute-0 lvm[261703]: VG ceph_vg1 finished
Jan 21 14:27:03 compute-0 lvm[261705]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 21 14:27:03 compute-0 lvm[261705]: VG ceph_vg2 finished
Jan 21 14:27:03 compute-0 ceph-mon[75031]: from='client.14728 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:27:03 compute-0 ceph-mon[75031]: pgmap v1433: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:27:03 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/492460273' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 21 14:27:03 compute-0 lvm[261711]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 21 14:27:03 compute-0 lvm[261711]: VG ceph_vg0 finished
Jan 21 14:27:03 compute-0 practical_lumiere[261578]: {}
Jan 21 14:27:03 compute-0 systemd[1]: libpod-68e5ec6902a5297502605e7d094586f69ddfb8632db3fccb450273efa0a72300.scope: Deactivated successfully.
Jan 21 14:27:03 compute-0 systemd[1]: libpod-68e5ec6902a5297502605e7d094586f69ddfb8632db3fccb450273efa0a72300.scope: Consumed 1.270s CPU time.
Jan 21 14:27:03 compute-0 podman[261541]: 2026-01-21 14:27:03.523450102 +0000 UTC m=+0.959259255 container died 68e5ec6902a5297502605e7d094586f69ddfb8632db3fccb450273efa0a72300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:27:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-da48322a14a78077b46a47f46876c839e6947b75ea1c6c14e9916b043eda1100-merged.mount: Deactivated successfully.
Jan 21 14:27:03 compute-0 podman[261541]: 2026-01-21 14:27:03.611309307 +0000 UTC m=+1.047118460 container remove 68e5ec6902a5297502605e7d094586f69ddfb8632db3fccb450273efa0a72300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 21 14:27:03 compute-0 systemd[1]: libpod-conmon-68e5ec6902a5297502605e7d094586f69ddfb8632db3fccb450273efa0a72300.scope: Deactivated successfully.
Jan 21 14:27:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 21 14:27:03 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2605465264' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 21 14:27:03 compute-0 sudo[261410]: pam_unix(sudo:session): session closed for user root
Jan 21 14:27:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 21 14:27:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:27:03 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 21 14:27:03 compute-0 ceph-mon[75031]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:27:03 compute-0 sudo[261743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 21 14:27:03 compute-0 sudo[261743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 21 14:27:03 compute-0 sudo[261743]: pam_unix(sudo:session): session closed for user root
Jan 21 14:27:04 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14734 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:27:04 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2605465264' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 21 14:27:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:27:04 compute-0 ceph-mon[75031]: from='mgr.14122 192.168.122.100:0/2095816634' entity='mgr.compute-0.tnwklj' 
Jan 21 14:27:04 compute-0 ceph-mon[75031]: from='client.14734 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:27:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 21 14:27:04 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2015041558' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 21 14:27:04 compute-0 ceph-mgr[75322]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:27:04 compute-0 ceph-mon[75031]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 21 14:27:05 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14738 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:27:05 compute-0 ceph-mon[75031]: from='client.? 192.168.122.100:0/2015041558' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 21 14:27:05 compute-0 ceph-mon[75031]: pgmap v1434: 305 pgs: 305 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Jan 21 14:27:05 compute-0 ceph-mon[75031]: from='client.14738 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:27:05 compute-0 ceph-mgr[75322]: log_channel(audit) log [DBG] : from='client.14740 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 21 14:27:06 compute-0 ceph-mon[75031]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 21 14:27:06 compute-0 ceph-mon[75031]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774545664' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
